renter

package
v1.5.9
Published: Jul 21, 2022 License: MIT Imports: 51 Imported by: 0

README

Renter

The Renter is responsible for tracking and actively maintaining all of the files that a user has uploaded to Sia. This includes the location and health of these files. The Renter, via the HostDB and the Contractor, is also responsible for picking hosts and maintaining the relationship with them.

The renter is unique for having two different logs. The first is a general renter activity log, and the second is a repair log. The repair log is intended to be a high-signal log that tells users which files are being repaired and whether the repair jobs have been successful. Where there are failures, the repair log should try to document what those failures were. Every message in the repair log should be interesting and useful to a power user; there should be no logspam and no messages that would only make sense to siad developers.

Testing

Testing the Renter module follows these guidelines.

  1. file.go will have unit tests in file_test.go
  2. In file_test.go there will be one main test named TestFile. TestFile will have subtests for specific methods, conditions, etc., such as TestFile/method1 or TestFile/condition1.

Since tests are run by providing the package, it is already clear that the tests correspond to the Renter, so Renter in the name is redundant.

An example of a simple test file can be found in refreshpaths_test.go.

An example of a test with a number of subtests can be found in uploadheap_test.go.
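
As an illustration of the naming convention, a minimal sketch (the subtest names and bodies here are hypothetical, not taken from file_test.go):

package renter

import "testing"

// TestFile is the single top-level test for file.go. Each method or
// condition gets its own subtest so it can be run in isolation, e.g.
// `go test -run TestFile/method1`.
func TestFile(t *testing.T) {
	t.Run("method1", func(t *testing.T) {
		// exercise one method of the file type
	})
	t.Run("condition1", func(t *testing.T) {
		// exercise one specific edge condition
	})
}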

Submodules

The Renter has several submodules that each perform a specific function for the Renter. This README will provide brief overviews of the submodules, but for more detailed descriptions of the inner workings of the submodules the respective README files should be reviewed.

  • Contractor
  • Filesystem
  • HostDB
  • Proto

Contractor

The Contractor manages the Renter's contracts and is responsible for all contract actions such as new contract formation and contract renewals. The Contractor determines which contracts are GoodForUpload and GoodForRenew and marks them accordingly.

Filesystem

The Filesystem is responsible for ensuring that all of its supported file formats can be accessed in a threadsafe manner. It doesn't handle any persistence directly but instead relies on the underlying format's package to handle that itself.

HostDB

The HostDB curates and manages a list of hosts that may be useful for the renter in storing various types of data. The HostDB is responsible for scoring and sorting the hosts so that when hosts are needed for contracts, high-quality hosts are provided.

Proto

The proto module implements the renter's half of the renter-host protocol, including contract formation and renewal RPCs, uploading and downloading, verifying Merkle proofs, and synchronizing revision states. It is a low-level module whose functionality is largely wrapped by the Contractor.

Subsystems

The Renter has the following subsystems that help carry out its responsibilities.

TODO Subsystems need to be alphabetized below to match above list

Bubble Subsystem

Key Files

The bubble subsystem is responsible for making sure that updates to the filesystem's metadata are propagated up to the root directory. A bubble is the process of updating the filesystem metadata for the renter. It is called a bubble because when a directory's metadata is updated, a call to update the parent directory is made. This process continues until the root directory is reached. This results in any changes in metadata being "bubbled" to the top so that the root directory's metadata reflects the status of the entire filesystem.

If during a bubble a file is found that meets the threshold health for repair, a signal is sent to the repair loop. If a stuck chunk is found then a signal is sent to the stuck loop.

Since we are updating the metadata on disk during bubble calls, we want to ensure that only one bubble is being performed on a directory at a time. We do this through callQueueBubbleUpdate and managedCompleteBubbleUpdate. The bubbleScheduler has a bubbleUpdates field that tracks all the bubbles and their bubbleStatus. Bubbles can be queued, active, or pending.

When bubble is called on a directory, callQueueBubbleUpdate will check to see if there are any queued, active, or pending bubbles for the directory. If there are no bubbles being tracked for that directory, the bubble update is queued and added to the FIFO queue. If there is a bubble currently queued or pending for the directory, the update is ignored. If there is a bubble update that is active, the status is updated to pending.

The bubbleScheduler works through the queued bubble updates in callThreadedProcessBubbleUpdates. When a bubble update is popped from the queue its status is set to active while the bubble is being performed. When the bubble is complete, managedCompleteBubbleUpdate is called.

When managedCompleteBubbleUpdate is called, if the status is active then the update is complete and it is removed from the bubbleScheduler. If the status is pending then the update is added back to the FIFO queue with a status of queued.
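
A rough sketch of that bookkeeping, with simplified types and fields that only illustrate the queued/active/pending transitions (this is not the actual bubbleScheduler code):

package renter

import "sync"

// bubbleStatus tracks where a directory's bubble is in its lifecycle:
// waiting in the queue, being performed, or needing another pass once the
// active one finishes.
type bubbleStatus int

const (
	bubbleQueued bubbleStatus = iota
	bubbleActive
	bubblePending
)

type bubbleScheduler struct {
	mu            sync.Mutex
	bubbleUpdates map[string]bubbleStatus // keyed by directory path
	fifo          []string                // directories waiting to be bubbled
}

func newBubbleScheduler() *bubbleScheduler {
	return &bubbleScheduler{bubbleUpdates: make(map[string]bubbleStatus)}
}

// callQueueBubbleUpdate queues a bubble for a directory, deduplicating
// against bubbles that are already queued, active, or pending.
func (bs *bubbleScheduler) callQueueBubbleUpdate(dir string) {
	bs.mu.Lock()
	defer bs.mu.Unlock()
	status, exists := bs.bubbleUpdates[dir]
	switch {
	case !exists:
		bs.bubbleUpdates[dir] = bubbleQueued
		bs.fifo = append(bs.fifo, dir)
	case status == bubbleActive:
		// A bubble is currently running; remember that another is needed.
		bs.bubbleUpdates[dir] = bubblePending
	default:
		// Already queued or pending: nothing to do.
	}
}

// managedCompleteBubbleUpdate is called when a bubble finishes. A pending
// bubble is re-queued; otherwise the directory is dropped from the tracker.
func (bs *bubbleScheduler) managedCompleteBubbleUpdate(dir string) {
	bs.mu.Lock()
	defer bs.mu.Unlock()
	if bs.bubbleUpdates[dir] == bubblePending {
		bs.bubbleUpdates[dir] = bubbleQueued
		bs.fifo = append(bs.fifo, dir)
		return
	}
	delete(bs.bubbleUpdates, dir)
}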

When a directory is bubbled, the metadata information is recalculated and saved to disk and then bubble is called on the parent directory until the top level directory is reached. During this calculation, every file in the directory is opened, modified, and fsync'd individually.

See benchmark results:

BenchmarkBubbleMetadata runs a benchmark on the perform bubble metadata method

Results (goos, goarch, CPU | iterations | benchmark output | date):

linux, amd64, Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz |  6 | 180163684 ns/op, 249937 B/op, 1606 allocs/op | 03/19/2020
linux, amd64, Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz | 34 |  34416443 ns/op | 11/10/2020
linux, amd64, Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz | 15 |  75880486 ns/op | 02/26/2021

Exports
  • BubbleMetadata
Inbound Complexities
  • callQueueBubbleUpdate is used by external subsystems to trigger a bubble update on a directory.
  • callThreadedProcessBubbleUpdates is called by the Renter on startup to launch the background thread that processes the queued bubble updates.
Outbound Complexities
  • BubbleMetadata calls callPrepareForBubble on a directory when the recursive flag is set to true and then calls callRefreshAll to execute the bubble updates.
  • managedPerformBubbleMetadata calls callRenterContractsAndUtilities to get the contract and utility maps before calling callCalculateDirectoryMetadata.
  • The Bubble subsystem triggers the Repair Loop when unhealthy files are found. This is done by managedPerformBubbleMetadata signaling the r.uploadHeap.repairNeeded channel when it is at the root directory and the AggregateHealth is above the RepairThreshold.
  • The Bubble subsystem triggers the Stuck Loop when stuck files are found. This is done by managedPerformBubbleMetadata signaling the r.uploadHeap.stuckChunkFound channel when it is at the root directory and AggregateNumStuckChunks is greater than zero.
Filesystem Controllers

Key Files

TODO

  • fill out subsystem explanation
Outbound Complexities
  • DeleteFile calls callThreadedBubbleMetadata after the file is deleted
  • RenameFile calls callThreadedBubbleMetadata on the current and new directories when a file is renamed
Fuse Subsystem

Key Files

The fuse subsystem enables mounting the renter as a virtual filesystem. When mounted, the kernel forwards I/O syscalls on files and folders to the userland code in this subsystem. For example, the read syscall is implemented by downloading data from Sia hosts.

Fuse is implemented using the hanwen/go-fuse/v2 series of packages, primarily fs and fuse. The fuse package recognizes a single node interface for files and folders, but the renter has two structs, one for files and another for folders. Both the fuseDirnode and the fuseFilenode implement the same Node interfaces.

The fuse implementation is remarkably sensitive to small details. UID mistakes, slow load times, or missing/incorrect method implementations can often destroy an external application's ability to interact with fuse. Currently we use ranger, Nautilus, vlc/mpv, and siastream when testing if fuse is still working well. More programs may be added to this list as we discover more programs that have unique requirements for working with the fuse package.

The siatest/renter suite has two files which are useful for testing fuse. The first is fuse_test.go, and the second is fusemock_test.go. The first file leverages a testgroup with a renter, a miner, and several hosts to mimic the Sia network, and then mounts a fuse folder which uses the full fuse implementation. The second file contains a hand-rolled implementation of a fake filesystem which implements the fuse interfaces. Both have a commented-out sleep at the end of the test which, when uncommented, allows a developer to explore the final mounted fuse folder with any system application to see if things are working correctly.

The mocked fuse is useful for debugging issues related to the fuse implementation. When using the renter implementation, it can be difficult to determine whether something is not working because there is a bug in the renter code, or because the fuse libraries are being used incorrectly. The mocked fuse is an easy way to replicate any desired behavior and check for misunderstandings that the programmer may have about how the fuse libraries are meant to be used.

Fuse Manager Subsystem

Key Files

The fuse manager subsystem keeps track of multiple fuse directories that are mounted at the same time. It maintains a list of mountpoints and maps each mountpoint to the fuse filesystem object that is mounted at that point. Only one folder can be mounted at each mountpoint, but the same folder can be mounted at many mountpoints.

When debugging fuse, it can be helpful to enable the 'Debug' option when mounting a filesystem. This option is commented out in the fuse manager in production, but searching for 'Debug:' in the file will reveal the line that can be uncommented to enable debugging. Be warned that when debugging is enabled, fuse becomes incredibly verbose.

Upon shutdown, the fuse manager will only attempt to unmount each folder one time. If the folder is busy or otherwise in use by another application, the unmount will fail and the user will have to manually unmount using fusermount or umount before that folder becomes available again. To the best of our current knowledge, there is no way to force an unmount.

Persistence Subsystem

Key Files

TODO

  • fill out subsystem explanation
Memory Subsystem

Key Files

The memory subsystem acts as a limiter on the total amount of memory that the renter can use. The memory subsystem does not manage actual memory; it's really just a counter. When some process in the renter wants to allocate memory, it uses the 'Request' method of the memory manager. The memory manager will block until enough memory has been returned to allow the request to be granted. The process is then responsible for calling 'Return' on the memory manager when it is done using the memory.

The memory manager is initialized with a base amount of memory. If a request is made for more than the base memory, the memory manager will block until all memory has been returned, at which point the memory manager will unblock the request. No other memory requests will be unblocked until enough of that large allocation has been returned.

Because 'Request' and 'Return' are just counters, they can be called as many times as necessary in whatever sizes are convenient.

When calling 'Request', a process should be sure to request all necessary memory at once, because if a single process calls 'Request' multiple times before returning any memory, this can cause a deadlock between multiple processes that are stuck waiting for more memory before they release memory.
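
A minimal sketch of such a counting limiter, assuming a condition-variable based implementation; the type and field names are illustrative rather than the renter's actual memory manager:

package renter

import "sync"

// memoryManagerSketch tracks how much of a base allowance is outstanding;
// it never allocates real memory.
type memoryManagerSketch struct {
	mu        sync.Mutex
	cond      *sync.Cond
	base      uint64 // total memory the manager may hand out
	available uint64 // memory not currently handed out
}

func newMemoryManagerSketch(base uint64) *memoryManagerSketch {
	mm := &memoryManagerSketch{base: base, available: base}
	mm.cond = sync.NewCond(&mm.mu)
	return mm
}

// Request blocks until 'amount' can be granted. Requests larger than the
// base amount wait until all memory has been returned, then proceed.
func (mm *memoryManagerSketch) Request(amount uint64) {
	mm.mu.Lock()
	defer mm.mu.Unlock()
	needed := amount
	if needed > mm.base {
		needed = mm.base // oversized requests wait for the full base amount
	}
	for mm.available < needed {
		mm.cond.Wait()
	}
	mm.available -= needed
}

// Return gives memory back to the manager and wakes blocked requesters.
func (mm *memoryManagerSketch) Return(amount uint64) {
	mm.mu.Lock()
	defer mm.mu.Unlock()
	mm.available += amount
	if mm.available > mm.base {
		mm.available = mm.base
	}
	mm.cond.Broadcast()
}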

Worker Subsystem

Key Files

The worker subsystem is the interface between the renter and the hosts. All actions (with the exception of some legacy actions that are currently being updated) that involve working with hosts will pass through the worker subsystem.

The Worker Pool

The heart of the worker subsystem is the worker pool, implemented in workerpool.go. The worker pool contains the set of workers that can be used to communicate with the hosts, one worker per host. The function callWorker can be used to retrieve a specific worker from the pool, and the function callUpdate can be used to update the set of workers in the worker pool. callUpdate will create new workers for any new contracts, will update workers for any contracts that changed, and will kill workers for any contracts that are no longer useful.
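
A condensed sketch of that bookkeeping, assuming a simple map keyed by the host's public key; the real callUpdate also updates existing workers and wires in contract utilities:

package renter

import "sync"

// workerPoolSketch illustrates the one-worker-per-host bookkeeping.
type workerPoolSketch struct {
	mu      sync.Mutex
	workers map[string]*workerSketch
}

type workerSketch struct {
	hostKey string
}

func newWorkerPoolSketch() *workerPoolSketch {
	return &workerPoolSketch{workers: make(map[string]*workerSketch)}
}

// callUpdate creates workers for new contracts and kills workers whose
// contracts are no longer useful.
func (wp *workerPoolSketch) callUpdate(contracts map[string]struct{}) {
	wp.mu.Lock()
	defer wp.mu.Unlock()
	for hostKey := range contracts {
		if _, exists := wp.workers[hostKey]; !exists {
			wp.workers[hostKey] = &workerSketch{hostKey: hostKey}
		}
	}
	for hostKey := range wp.workers {
		if _, exists := contracts[hostKey]; !exists {
			delete(wp.workers, hostKey) // contract is gone, drop the worker
		}
	}
}

// callWorker returns the worker for a host, if one exists.
func (wp *workerPoolSketch) callWorker(hostKey string) (*workerSketch, bool) {
	wp.mu.Lock()
	defer wp.mu.Unlock()
	w, exists := wp.workers[hostKey]
	return w, exists
}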

Inbound Complexities
  • callUpdate should be called on the worker pool any time that the set of contracts changes or has updates which would impact what actions a worker can take. For example, if a contract's utility changes or if a contract is cancelled.
    • Renter.SetSettings calls callUpdate after changing the settings of the renter. This is probably incorrect, as the actual contract set is updated by the contractor asynchronously, and really callUpdate should be triggered by the contractor as the set of hosts is changed.
    • Renter.threadedDownloadLoop calls callUpdate on each iteration of the outer download loop to ensure that it is always working with the most recent set of hosts. If the contractor is updated to be able to call callUpdate during maintenance, this call becomes unnecessary.
    • Renter.managedRefreshHostsAndWorkers calls callUpdate so that the renter has the latest list of hosts when performing uploads. Renter.managedRefreshHostsAndWorkers is itself called in many places, which means there's substantial complexity between the upload subsystem and the worker subsystem. This complexity can be eliminated by having the contractor be responsible for updating the worker pool as it changes the set of hosts, and also by having the worker pool store the host map, which is one of the key reasons Renter.managedRefreshHostsAndWorkers is called so often - this function returns the set of hosts in addition to updating the worker pool.
  • callWorker can be used to fetch a worker and queue work into the worker. The worker can be killed after callWorker has been called but before the returned worker has been used in any way.
    • renter.BackupsOnHost will use callWorker to retrieve a worker that can be used to pull the backups off of a host.
  • callWorkers can be used to fetch the list of workers from the worker pool. It should be noted that it is not safe to lock the worker pool, iterate through the workers, and then call locking functions on the workers. The worker pool must be unlocked if the workers are going to be acquiring locks, which means functions that loop over the list of workers must fetch that list separately.
The Worker

Each worker in the worker pool is responsible for managing communications with a single host. The worker has an infinite loop where it checks for work, performs any outstanding work, and then sleeps until it receives a wake, kill, or shutdown signal. The implementation for the worker is primarily in worker.go and workerloop.go.

Each type of work that the worker can perform has a queue. A unit of work is called a job. The worker queue and job structure has been re-written multiple times, and not every job has been ported yet to the latest structure. But using the latest structure, you can call queue.callAdd() to add a job to a queue. The worker loop will make all of the decisions around when to execute the job. Jobs are split into two types: serial and async. Serial jobs are anything that requires exclusive access to the file contract with the host; the worker will ensure that only one of these is running at a time. Async jobs are any jobs that don't require exclusive access to a resource; the worker will run multiple of these in parallel.

When a worker wakes or otherwise begins the work loop, the worker will check for each type of work in a specific order, therefore giving certain types of work priority over other types of work. For example, downloads are given priority over uploads. When the worker performs a piece of work, it will jump back to the top of the loop, meaning that a continuous stream of higher priority work can stall out all lower priority work.

When a worker is killed, the worker is responsible for going through the list of jobs that have been queued and gracefully terminating the jobs, returning or signaling errors where appropriate.

workerjobgeneric.go and workerjobgeneric_test.go contain all of the generic code and a basic reference implementation for building a job.
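
For orientation, a much-condensed sketch of the queue-plus-worker-loop pattern (not the actual workerjobgeneric.go code); it only shows callAdd and the serial/async split:

package renter

import "sync"

// jobSketch is a single unit of work; serial jobs need exclusive access to
// a shared resource (such as the file contract), async jobs do not.
type jobSketch struct {
	serial  bool
	execute func()
}

type jobQueueSketch struct {
	mu   sync.Mutex
	jobs []jobSketch
}

// callAdd queues a job for the worker loop to pick up later.
func (q *jobQueueSketch) callAdd(j jobSketch) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.jobs = append(q.jobs, j)
}

// launchNext pops one job and runs it. Serial jobs hold the single serial
// slot so only one runs at a time; async jobs are launched in parallel.
// It reports whether a job was found.
func (q *jobQueueSketch) launchNext(serialSlot *sync.Mutex) bool {
	q.mu.Lock()
	if len(q.jobs) == 0 {
		q.mu.Unlock()
		return false
	}
	j := q.jobs[0]
	q.jobs = q.jobs[1:]
	q.mu.Unlock()

	if j.serial {
		serialSlot.Lock()
		defer serialSlot.Unlock()
		j.execute()
		return true
	}
	go j.execute()
	return true
}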

Inbound Complexities
  • callQueueDownloadChunk can be used to schedule a job to participate in a chunk download
    • Renter.managedDistributeDownloadChunkToWorkers will use this method to issue a brand new download project to all of the workers.
    • unfinishedDownloadChunk.managedCleanUp will use this method to re-issue work to workers that are known to have passed on a job previously, but may be required now.
  • callQueueUploadChunk can be used to schedule a job to participate in a chunk upload
    • Renter.managedDistributeChunkToWorkers will use this method to distribute a brand new upload project to all of the workers.
    • unfinishedUploadChunk.managedNotifyStandbyWorkers will use this method to re-issue work to workers that are known to have passed on a job previously, but may be required now.
Outbound Complexities
  • managedPerformDownloadChunkJob is a mess of complexities and needs to be refactored to be compliant with the new subsystem format.
  • managedPerformUploadChunkJob is a mess of complexities and needs to be refactored to be compliant with the new subsystem format.
Download Subsystem

Key Files

TODO

  • expand subsystem description

The download code follows a clean, intuitive flow for getting highly parallel and computationally efficient downloads. When a download is requested, it gets split into its respective chunks (which are downloaded individually) and then put into the download heap and download history as a struct of type download.

A download contains the shared state of a download with all the information required for workers to complete it, additional information useful to users and completion functions which are executed upon download completion.

The download history contains a mapping from each download's UID, which is randomly assigned upon initialization, to its corresponding download struct. Unless cleared, users can retrieve information about ongoing and completed downloads by retrieving either the full history or a specific download from the history using the API.

The primary purpose of the download heap is to keep downloads on standby until there is enough memory available to send the downloads off to the workers. The heap is sorted first by priority, and then by a few other criteria as well.

Some downloads, in particular downloads issued by the repair code, have already had their memory allocated. These downloads get to skip the heap and go straight for the workers.

Before we distribute a download to workers, we check the localPath of the file to see if it is available on disk. If it is, and disableLocalFetch isn't set, we load the download from disk instead of distributing it to workers.

When a download is distributed to workers, it is given to every single worker without checking whether that worker is appropriate for the download. Each worker has its own queue, which is bottlenecked by the fact that a worker can only process one item at a time. When the worker gets to a download request, it determines whether it is suited for downloading that particular file. The criteria it uses include whether or not it has a piece of that chunk, how many other workers are currently downloading pieces or have completed pieces for that chunk, and finally things like worker latency and worker price.

If the worker chooses to download a piece, it will register itself with that piece, so that other workers know how many workers are downloading each piece. This keeps everything cleanly coordinated and prevents too many workers from downloading a given piece, while at the same time you don't need a giant messy coordinator tracking everything. If a worker chooses not to download a piece, it will add itself to the list of standby workers, so that in the event of a failure, the worker can be returned to and used again as a backup worker. The worker may also decide that it is not suitable at all (for example, if the worker has recently had some consecutive failures, or if the worker doesn't have access to a piece of that chunk), in which case it will mark itself as unavailable to the chunk.

As workers complete, they will release memory and check on the overall state of the chunk. If some workers fail, they will enlist the standby workers to pick up the slack.

When the final required piece finishes downloading, the worker who completed the final piece will spin up a separate thread to decrypt, decode, and write out the download. That thread will then clean up any remaining resources, and if this was the final unfinished chunk in the download, it'll mark the download as complete.

The download process has a slightly complicating factor, which is overdrive workers. Traditionally, if you need 10 pieces to recover a file, you will use 10 workers. But if you have an overdrive of '2', you will actually use 12 workers, meaning you download 2 more pieces than you need. This means that up to two of the workers can be slow or fail and the download can still complete quickly. This complicates resource handling, because not all memory can be released as soon as a download completes - there may be overdrive workers still out fetching the file. To handle this, a catchall 'cleanUp' function is used which gets called every time a worker finishes, and every time recovery completes. The result is that memory gets cleaned up as required, and no overarching coordination is needed between the overdrive workers (who do not even know that they are overdrive workers) and the recovery function.

By default, the download code organizes itself around having maximum possible throughput. That is, it is highly parallel, and exploits that parallelism as efficiently and effectively as possible. The hostdb does a good job of selecting for hosts that have good traits, so we can generally assume that every host or worker at our disposal is reasonably effective in all dimensions, and that the overall selection is generally geared towards the user's preferences.

We can leverage the standby workers in each unfinishedDownloadChunk to emphasize various traits. For example, if we want to prioritize latency, we'll put a filter in the 'managedProcessDownloadChunk' function that has a worker go standby instead of accept a chunk if the latency is higher than the targeted latency. These filters can target other traits as well, such as price and total throughput.
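
A minimal sketch of such a filter, with hypothetical types and field names; the real managedProcessDownloadChunk weighs several criteria, while this only shows the latency check:

package renter

import "time"

// downloadWorkerSketch and downloadChunkSketch are condensed, hypothetical
// stand-ins for the worker and unfinishedDownloadChunk types.
type downloadWorkerSketch struct {
	latencyEstimate time.Duration
}

type downloadChunkSketch struct {
	latencyTarget  time.Duration
	standbyWorkers []*downloadWorkerSketch
}

// acceptOrStandby decides whether a worker should accept the chunk now or
// register as a standby worker that can be recalled if other workers fail.
func (c *downloadChunkSketch) acceptOrStandby(w *downloadWorkerSketch) bool {
	if c.latencyTarget > 0 && w.latencyEstimate > c.latencyTarget {
		c.standbyWorkers = append(c.standbyWorkers, w)
		return false
	}
	return true
}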

Download Streaming Subsystem

Key Files

TODO

  • fill out subsystem explanation
Download Project Subsystem

Key Files

The download project subsystem contains all the necessary logic to download a single chunk. Such a project can be initialized with a set of roots, which is what happens for Skynet downloads, or with a Siafile, where we already know what hosts have what roots.

The project will, immediately after it has been initialized, spin up a set of jobs that will locate which hosts have which sectors. This is accomplished through 'HasSector' worker jobs. The result of this initial scan is saved in the project's worker state. Every so often this state is recalculated to ensure we stay up to date on the best way to retrieve the file from the network.

Once the project has been initialized, it can be used to download data. Because we keep track of the network state, it is beneficial to reuse these objects, as doing so saves the time it takes to scan the network. Downloading data happens through a different download project called the 'ProjectDownloadChunk', or PDC for short.

The PDC will use the network scan performed earlier to launch download jobs on workers that should be able to retrieve the piece. This process consists of two stages, namely the initial launch stage and the overdrive stage. The download code is careful when selecting the initial set of workers to launch: it takes into account historical job timings and tries to make good estimates of how long a worker should take to retrieve the data from its host. This, in combination with a caller-configurable parameter called 'price per millisecond', is used to construct a set of workers best suited for the download job. Once these workers have been launched, the second stage kicks in. This stage is called the overdrive stage, and it makes sure that additional workers are launched should a worker in the initial set fail or be late.
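
A small illustrative sketch of how a 'price per millisecond' parameter can fold job cost into a time-based ranking; the formula and names are assumptions for illustration, not the exact siad implementation:

package renter

import (
	"sort"
	"time"
)

// downloadCandidateSketch is a hypothetical view of a worker that could
// serve a piece of the chunk.
type downloadCandidateSketch struct {
	expectedDuration time.Duration // historical estimate for this job type
	jobPrice         float64       // expected cost of the download job
}

// rankCandidates sorts candidates by an adjusted duration: the expected
// job time plus the job price converted into time via pricePerMS. A large
// pricePerMS means the caller values speed over cost.
func rankCandidates(cands []downloadCandidateSketch, pricePerMS float64) {
	adjusted := func(c downloadCandidateSketch) time.Duration {
		penalty := time.Duration(c.jobPrice/pricePerMS) * time.Millisecond
		return c.expectedDuration + penalty
	}
	sort.Slice(cands, func(i, j int) bool {
		return adjusted(cands[i]) < adjusted(cands[j])
	})
}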

Stream Buffer Subsystem

Key Files

The stream buffer subsystem coordinates buffering for a set of streams. Each stream has an LRU which includes both the recently visited data as well as data that is being buffered in front of the current read position. The LRU is implemented in streambufferlru.go.

If there are multiple streams open from the same data source at once, they will share their cache. Each stream will maintain its own LRU, but the data is stored in a common stream buffer. The stream buffers draw their data from a data source interface, which allows multiple different types of data sources to use the stream buffer.
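
A rough sketch of that sharing, assuming a hypothetical data source interface and a buffer set keyed by source ID; the real stream buffer has many more moving parts:

package renter

import "sync"

// streamDataSourceSketch is a minimal, hypothetical data source interface;
// the real interface exposes more than ReadAt.
type streamDataSourceSketch interface {
	ID() string
	ReadAt(p []byte, off int64) (int, error)
}

// streamBufferSketch holds the cached data for one data source and is
// shared by every stream opened on that source.
type streamBufferSketch struct {
	source   streamDataSourceSketch
	refCount int
}

type streamBufferSetSketch struct {
	mu      sync.Mutex
	buffers map[string]*streamBufferSketch
}

func newStreamBufferSetSketch() *streamBufferSetSketch {
	return &streamBufferSetSketch{buffers: make(map[string]*streamBufferSketch)}
}

// callNewStream returns the shared buffer for a data source, creating it
// only if no other stream currently has that source open.
func (set *streamBufferSetSketch) callNewStream(ds streamDataSourceSketch) *streamBufferSketch {
	set.mu.Lock()
	defer set.mu.Unlock()
	if sb, ok := set.buffers[ds.ID()]; ok {
		sb.refCount++
		return sb
	}
	sb := &streamBufferSketch{source: ds, refCount: 1}
	set.buffers[ds.ID()] = sb
	return sb
}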

Upload Subsystem

Key Files

TODO

  • expand subsystem description

The Renter uploads siafiles in 40MB chunks. Redundancy is kept at the chunk level, which means each chunk will then be split into datapieces number of pieces. For example, a 10/20 scheme means that each 40MB chunk is split into 10 4MB pieces, which in turn are uploaded along with 20 parity pieces to 30 different hosts (10 data pieces and 20 parity pieces).

Chunks are uploaded by first distributing the chunk to the worker pool. The chunk is distributed to the worker pool by adding it to the upload queue and then signalling the worker upload channel. Workers that are waiting for work will receive on this channel and begin the upload. First the worker creates a connection with the host by creating an editor. Next the editor is used to update the file contract with the new data being uploaded. This will update the Merkle root and the contract revision.

Outbound Complexities

  • The upload subsystem calls callThreadedBubbleMetadata from the Health Loop to update the filesystem with the new upload
  • Upload calls callBuildAndPushChunks to add upload chunks to the uploadHeap and then signals the heap's newUploads channel so that the Repair Loop will work through the heap and upload the chunks

Upload Streaming Subsystem

Key Files

TODO

  • fill out subsystem explanation

Inbound Complexities

  • The skyfile subsystem makes three calls to callUploadStreamFromReader() in skyfile.go
  • The snapshot subsystem makes a call to callUploadStreamFromReader()
Health and Repair Subsystem

Key Files

TODO

  • Move HealthLoop and related methods out of repair.go to health.go
  • Pull out repair code from uploadheap.go so that uploadheap.go is only heap related code. Put in repair.go
  • Pull out stuck loop code from uploadheap.go and put in repair.go
  • Review naming of files associated with this subsystem
  • Create benchmark for health loop and add print outs to Health Loop section
  • Break out Health, Repair, and Stuck code into 3 distinct subsystems

There are 3 main functions that work together to make up Sia's file repair mechanism, threadedUpdateRenterHealth, threadedUploadAndRepairLoop, and threadedStuckFileLoop. These 3 functions will be referred to as the health loop, the repair loop, and the stuck loop respectively.

The Health and Repair subsystem operates by scanning aggregate information kept in each directory's metadata. An example of this metadata would be the aggregate filesystem health. Each directory has a field AggregateHealth which represents the worst aggregate health of any file or subdirectory in the directory. Because the field is recursive, the AggregateHealth of the root directory represents the worst health of any file in the entire filesystem. Health is defined as the percent of redundancy missing; this means that a health of 0 indicates a file at full health.

threadedUpdateRenterHealth is responsible for keeping the aggregate information up to date, while the other two loops use that information to decide what upload and repair actions need to be performed.

Health Loop

The health loop is responsible for ensuring that the health of the renter's file directory is updated periodically. Along with the health, the metadata for the files and directories is also updated.

Two of the key directory metadata fields that the health loop uses are LastHealthCheckTime and AggregateLastHealthCheckTime. LastHealthCheckTime is the timestamp of when a directory or file last had its health re-calculated during a bubble call. When determining which directory to start with when updating the renter's file system, the health loop follows the path of oldest AggregateLastHealthCheckTime to find the directory or sub tree that is the most out of date. To do this, the health loop uses managedOldestHealthCheckTime. This method starts at the root level of the renter's file system and begins checking the AggregateLastHealthCheckTime of the subdirectories. It then finds which one is the oldest and moves into that subdirectory and continues the search. Once it reaches a directory that either has no subdirectories, has an older AggregateLastHealthCheckTime than any of its subdirectories, or is the root of a reasonably sized sub tree as defined by the health loop constants, it returns that timestamp and the SiaPath of the directory.

Once the health loop has found the most out of date directory or sub tree, it uses the Refresh Paths subsystem to trigger bubble updates that the Bubble subsystem manages. Once the entire renter's directory has been updated within the healthCheckInterval the health loop sleeps until the time interval has passed.
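
A condensed sketch of that descent, using hypothetical types; the "reasonably sized sub tree" cutoff is omitted for brevity:

package renter

import "time"

// dirNodeSketch is a hypothetical in-memory view of a directory's metadata.
type dirNodeSketch struct {
	siaPath                      string
	aggregateLastHealthCheckTime time.Time
	subDirs                      []*dirNodeSketch
}

// oldestHealthCheckDir starts at the root and keeps descending into the
// subdirectory with the oldest AggregateLastHealthCheckTime until the
// current directory is itself the most out of date (or has no
// subdirectories), then returns its path and timestamp.
func oldestHealthCheckDir(root *dirNodeSketch) (string, time.Time) {
	current := root
	for {
		oldest := current
		for _, sub := range current.subDirs {
			if sub.aggregateLastHealthCheckTime.Before(oldest.aggregateLastHealthCheckTime) {
				oldest = sub
			}
		}
		if oldest == current {
			return current.siaPath, current.aggregateLastHealthCheckTime
		}
		current = oldest
	}
}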

Inbound Complexities

  • The Repair loop relies on Health Loop and the Bubble Subsystem to keep the filesystem accurately updated in order to work through the file system in the correct order.
Repair Loop

The repair loop is responsible for uploading new files to the renter and repairing existing files. The heart of the repair loop is threadedUploadAndRepair, a thread that continually checks for work, schedules work, and then updates the filesystem when work is completed.

The renter tracks backups and siafiles separately, which essentially means the renter has a backup filesystem and a siafile filesystem. As such, we need to check both of these filesystems separately in the repair loop. Since the backups are in a different filesystem, the health loop does not check on the backups, which means there is no outside trigger telling the repair loop that a backup wasn't uploaded successfully and needs to be repaired. Because of this we always check for backup chunks first to ensure backups are succeeding. There is a size limit on the heap to help keep memory usage in check, so by adding backup chunks to the heap first we ensure that we never skip over backup chunks due to a full heap.

For the siafile filesystem the repair loop uses a directory heap to prioritize which chunks to add. The directoryHeap is a max heap of directory elements sorted by health. The directory heap is initialized by pushing an unexplored root directory element. As directory elements are popped off the heap, they are explored, which means the directory that was popped off the heap as unexplored gets marked as explored and added back to the heap, while all the subdirectories are added as unexplored. Each directory element contains the health information of the directory it represents, both directory health and aggregate health. If a directory is unexplored the aggregate health is considered; if the directory is explored the directory health is considered in the sorting of the heap. This allows us to navigate through the filesystem and follow the path of worst health to find the directories most in need of repair first. When the renter needs chunks to add to the upload heap, directory elements are popped off the heap and chunks are pulled from that directory to be added to the upload heap. If all the chunks that need repairing are added to the upload heap then the directory element is dropped. If not all the chunks that need repair are added, then the directory element is added back to the directory heap with a health equal to that of the next chunk that would have been added, thus re-prioritizing that directory in the heap.
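
A condensed sketch of such a heap using container/heap; the element fields and helper names are illustrative, not the renter's actual directoryHeap:

package renter

import "container/heap"

// dirElementSketch is a condensed directory element: unexplored elements
// sort by aggregate health, explored elements by their own health.
type dirElementSketch struct {
	siaPath         string
	health          float64
	aggregateHealth float64
	explored        bool
}

func (d dirElementSketch) sortHealth() float64 {
	if d.explored {
		return d.health
	}
	return d.aggregateHealth
}

// directoryHeapSketch is a max-heap: the worst health is popped first.
type directoryHeapSketch []dirElementSketch

func (h directoryHeapSketch) Len() int            { return len(h) }
func (h directoryHeapSketch) Less(i, j int) bool  { return h[i].sortHealth() > h[j].sortHealth() }
func (h directoryHeapSketch) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *directoryHeapSketch) Push(x interface{}) { *h = append(*h, x.(dirElementSketch)) }
func (h *directoryHeapSketch) Pop() interface{} {
	old := *h
	elem := old[len(old)-1]
	*h = old[:len(old)-1]
	return elem
}

// nextExploredDir pops elements until an explored directory surfaces. An
// unexplored directory is marked explored and pushed back while its
// subdirectories are pushed unexplored, so the walk follows worst health.
func nextExploredDir(h *directoryHeapSketch, subDirsOf func(string) []dirElementSketch) (dirElementSketch, bool) {
	for h.Len() > 0 {
		elem := heap.Pop(h).(dirElementSketch)
		if elem.explored {
			return elem, true
		}
		elem.explored = true
		heap.Push(h, elem)
		for _, sub := range subDirsOf(elem.siaPath) {
			heap.Push(h, sub)
		}
	}
	return dirElementSketch{}, false
}

In this sketch the repair loop would seed the heap by pushing the root directory as a single unexplored element and then repeatedly pull explored directories from it.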

To build the upload heap for the siafile filesystem, the repair loop checks if the file system is healthy by checking the top directory element in the directory heap. If healthy and there are no chunks currently in the upload heap, then the repair loop sleeps until it is triggered by a new upload or a repair is needed. If the filesystem is in need of repair, chunks are added to the upload heap by popping the directory off the directory heap and adding any chunks that are a worse health than the next directory in the directory heap. This continues until the MaxUploadHeapChunks is met. The repair loop will then repair those chunks and call bubble on the directories that chunks were added from to keep the file system updated. This will continue until the file system is healthy, which means all files have a health less than the RepairThreshold.

When repairing chunks, the Renter will first try to repair the chunk from the local file on disk. If the local file is not present, the Renter will download the needed data from its contracts in order to perform the repair. In order for a remote repair, i.e. repairing from data downloaded from the Renter's contracts, to be successful, the chunk must be at 1x redundancy or better. If a chunk is below 1x redundancy and the local file is not present, the chunk, and therefore the file, is considered lost as there is no way to repair it.

NOTE: if the repair loop does not find a local file on disk, it will reset the localpath of the siafile to an empty string. This is done to avoid the siafile being corrupted in the future by a different file being placed on disk at the original localpath location.

Inbound Complexities

  • Upload adds chunks directly to the upload heap by calling callBuildAndPushChunks
  • Repair loop will sleep until work is needed meaning other threads will wake up the repair loop by calling the repairNeeded channel
  • There is always enough space in the heap, or the number of backup chunks is few enough that all the backup chunks are always added to the upload heap.
  • Stuck chunks get added directly to the upload heap and have priority over normal uploads and repairs
  • Streaming upload chunks are added directly to the upload heap and have the highest priority

Outbound Complexities

  • The Repair loop relies on the Health Loop and the Bubble subsystem to keep the filesystem accurately updated in order to work through the file system in the correct order.
  • The repair loop passes chunks on to the upload subsystem and expects that subsystem to handle the request
  • Upload calls callBuildAndPushChunks to add upload chunks to the uploadHeap and then signals the heap's newUploads channel so that the Repair Loop will work through the heap and upload the chunks
Stuck Loop

Files are marked as stuck if the Renter is unable to fully upload the file. While there are many reasons a file might not be fully uploaded, failed uploads due to the Renter, i.e. the Renter shutting down, will not cause the file to be marked as stuck. The goal is to mark a chunk as stuck if it is independently unable to be uploaded. Meaning, this chunk is unable to be repaired while other chunks are able to be repaired. We mark a chunk as stuck so that the repair loop will ignore it in the future and instead focus on chunks that are able to be repaired.

The stuck loop is responsible for targeting chunks that didn't get repaired properly. There are two methods for adding stuck chunks to the upload heap: the first method is random selection and the second is using the stuckStack. On start up the stuckStack is empty so the stuck loop begins using the random selection method. Once the stuckStack begins to fill, the stuck loop will use the stuckStack first before using the random method.

For the random selection, one chunk is selected uniformly at random out of all of the stuck chunks in the filesystem. The stuck loop does this by first selecting a directory containing stuck chunks by calling managedStuckDirectory. Then managedBuildAndPushRandomChunk is called to select a file with stuck chunks and add one stuck chunk from that file to the heap. The stuck loop repeats this process of finding a stuck chunk until there are maxRandomStuckChunksInHeap stuck chunks in the upload heap or it has added maxRandomStuckChunksAddToHeap stuck chunks to the upload heap. Stuck chunks have priority in the heap, so limiting the number to maxStuckChunksInHeap at a time prevents the heap from being saturated with stuck chunks that potentially cannot be repaired, which would prevent any other files from being repaired.

For the stuck loop to begin using the stuckStack there needs to have been successful stuck chunk repairs. If the repair of a stuck chunk is successful, the SiaPath of the SiaFile it came from is added to the Renter's stuckStack and a signal is sent to the stuck loop so that another stuck chunk can be added to the heap. The repair loop will continue to add stuck chunks from the stuckStack until there are maxStuckChunksInHeap stuck chunks in the upload heap. Stuck chunks added from the stuckStack have priority over random stuck chunks; this is determined by setting the fileRecentlySuccessful field to true for the chunk. The stuckStack tracks maxSuccessfulStuckRepairFiles number of SiaFiles that have had stuck chunks successfully repaired in a LIFO stack. If the LIFO stack already has maxSuccessfulStuckRepairFiles in it, when a new SiaFile is pushed onto the stack the oldest SiaFile is dropped from the stack so the new SiaFile can be added. Additionally, if a SiaFile is being added that is already being tracked, then the original reference is removed and the SiaFile is added to the top of the stack. If there have been successful stuck chunk repairs, the stuck loop will try to add additional stuck chunks from these files first before trying to add a random stuck chunk. The idea is that since all the chunks in a SiaFile have the same redundancy settings and were presumably uploaded around the same time, if one chunk was able to be repaired, the other chunks should be repairable as well. Additionally, the reason a LIFO stack is used is that the more recent a success was, the higher our confidence in additional successes.
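
A simplified sketch of that stack behaviour (a bounded LIFO with de-duplication); the names are illustrative, not the actual stuckStack code:

package renter

// stuckStackSketch is a bounded LIFO of SiaPaths whose stuck chunks were
// recently repaired; re-adding a tracked path moves it to the top instead
// of duplicating it, and a full stack drops its oldest entry.
type stuckStackSketch struct {
	paths    []string // most recent success at the end
	maxFiles int      // e.g. maxSuccessfulStuckRepairFiles
}

func (s *stuckStackSketch) push(siaPath string) {
	// Remove an existing reference so the path is only tracked once.
	for i, p := range s.paths {
		if p == siaPath {
			s.paths = append(s.paths[:i], s.paths[i+1:]...)
			break
		}
	}
	// Drop the oldest entry if the stack is already full.
	if s.maxFiles > 0 && len(s.paths) >= s.maxFiles {
		s.paths = s.paths[1:]
	}
	s.paths = append(s.paths, siaPath)
}

func (s *stuckStackSketch) pop() (string, bool) {
	if len(s.paths) == 0 {
		return "", false
	}
	siaPath := s.paths[len(s.paths)-1]
	s.paths = s.paths[:len(s.paths)-1]
	return siaPath, true
}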

If the repair wasn't successful, the stuck loop will wait for the repairStuckChunkInterval to pass and then try another random stuck chunk. If the stuck loop doesn't find any stuck chunks, it will sleep until a bubble wakes it up by finding a stuck chunk.

Inbound Complexities

  • Chunk repair code signals the stuck loop when a stuck chunk is successfully repaired
  • The Bubble subsystem signals the stuck loop when AggregateNumStuckChunks for the root directory is > 0

State Complexities

  • The stuck loop and the repair loop use a number of the same methods when building unfinishedUploadChunks to add to the uploadHeap. These methods rely on the repairTarget to know if they should target stuck chunks or unstuck chunks

TODOs

  • once bubbling metadata has been updated to be more I/O efficient this code should be removed and we should call bubble when we clean up the upload chunk after a successful repair.
Backup Subsystem

Key Files

TODO

  • expand subsystem description

The backup subsystem of the renter is responsible for creating local and remote backups of the user's data, such that all data is able to be recovered onto a new machine should the current machine + metadata be lost.

Refresh Paths Subsystem

Key Files

The refresh paths subsystem of the renter is a helper subsystem that tracks the minimum unique paths that need to be refreshed in order to refresh the entire affected portion of the file system.

Inbound Complexities

  • callAdd is used to try to add a new path.
  • callRefreshAll is used to refresh all the directories corresponding to the unique paths in order to update the filesystem.
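
A rough sketch of the "minimum unique paths" idea, using plain string paths instead of SiaPaths: because a bubble propagates from a directory up to root, only the deepest affected directories need to be kept.

package renter

import "strings"

// uniquePathsSketch keeps only the deepest directories that need a bubble;
// refreshing a child also refreshes all of its ancestors.
type uniquePathsSketch struct {
	paths map[string]struct{}
}

func newUniquePathsSketch() *uniquePathsSketch {
	return &uniquePathsSketch{paths: make(map[string]struct{})}
}

func (u *uniquePathsSketch) callAdd(path string) {
	// A tracked descendant (or the path itself) already covers this path.
	for existing := range u.paths {
		if strings.HasPrefix(existing+"/", path+"/") {
			return
		}
	}
	// The new, deeper path covers any tracked ancestors, so drop them.
	for existing := range u.paths {
		if strings.HasPrefix(path+"/", existing+"/") {
			delete(u.paths, existing)
		}
	}
	u.paths[path] = struct{}{}
}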

Documentation

Overview

Package renter is responsible for uploading and downloading files on the sia network.

Index

Constants

const (
	// AlertMSGSiafileLowRedundancy indicates that a file is below 75% redundancy.
	AlertMSGSiafileLowRedundancy = "The SiaFile mentioned in the 'Cause' is below 75% redundancy"
	// AlertSiafileLowRedundancyThreshold is the health threshold at which we start
	// registering the LowRedundancy alert for a Siafile.
	AlertSiafileLowRedundancyThreshold = 0.75
)
const (
	// DefaultMaxDownloadSpeed is set to zero to indicate no limit, the user
	// can set a custom MaxDownloadSpeed through the API
	DefaultMaxDownloadSpeed = 0

	// DefaultMaxUploadSpeed is set to zero to indicate no limit, the user
	// can set a custom MaxUploadSpeed through the API
	DefaultMaxUploadSpeed = 0
)

Default bandwidth usage parameters.

const (

	// PersistFilename is the filename to be used when persisting renter
	// information to a JSON file
	PersistFilename = "renter.json"
	// SiaDirMetadata is the name of the metadata file for the sia directory
	SiaDirMetadata = ".siadir"
)
const (
	// DefaultStreamCacheSize is the default cache size of the /renter/stream cache in
	// chunks, the user can set a custom cache size through the API
	DefaultStreamCacheSize = 2
)

Deprecated consts.

TODO: Tear out all related code and drop these consts.

const (

	// PriceEstimationSafetyFactor is the factor of safety used in the price
	// estimation to account for any missed costs
	PriceEstimationSafetyFactor = 1.2
)

Constants which don't fit into another category very well.

Variables

var (
	// ErrBadFile is an error when a file does not qualify as .sia file
	ErrBadFile = errors.New("not a .sia file")
	// ErrIncompatible is an error when file is not compatible with current
	// version
	ErrIncompatible = errors.New("file is not compatible with current version")
	// ErrNoNicknames is an error when no nickname is given
	ErrNoNicknames = errors.New("at least one nickname must be supplied")
	// ErrNonShareSuffix is an error when the suffix of a file does not match
	// the defined share extension
	ErrNonShareSuffix = errors.New("suffix of file must be " + modules.SiaFileExtension)
)
var (
	// ErrRootNotFound is returned if all workers were unable to recover the
	// root
	ErrRootNotFound = errors.New("workers were unable to recover the data by sector root - all workers failed")

	// ErrProjectTimedOut is returned when the project timed out
	ErrProjectTimedOut = errors.New("project timed out")
)
var (
	// MaxRegistryReadTimeout is the default timeout used when reading from
	// the registry.
	MaxRegistryReadTimeout = build.Select(build.Var{
		Dev:      30 * time.Second,
		Standard: 5 * time.Minute,
		Testing:  10 * time.Second,
	}).(time.Duration)

	// DefaultRegistryUpdateTimeout is the default timeout used when updating
	// the registry.
	DefaultRegistryUpdateTimeout = build.Select(build.Var{
		Dev:      30 * time.Second,
		Standard: 5 * time.Minute,
		Testing:  3 * time.Second,
	}).(time.Duration)

	// ErrRegistryEntryNotFound is returned if all workers were unable to fetch
	// the entry.
	ErrRegistryEntryNotFound = errors.New("registry entry not found")

	// ErrRegistryLookupTimeout is similar to ErrRegistryEntryNotFound but it is
	// returned instead if the lookup timed out before all workers returned.
	ErrRegistryLookupTimeout = errors.New("registry entry not found within given time")

	// ErrRegistryUpdateInsufficientRedundancy is returned if updating the
	// registry failed due to running out of workers before reaching
	// MinUpdateRegistrySuccess successful updates.
	ErrRegistryUpdateInsufficientRedundancy = errors.New("registry update failed due reach sufficient redundancy")

	// ErrRegistryUpdateNoSuccessfulUpdates is returned if not a single update
	// was successful.
	ErrRegistryUpdateNoSuccessfulUpdates = errors.New("all registry updates failed")

	// ErrRegistryUpdateTimeout is returned when updating the registry was
	// aborted before reaching MinUpdateRegistrySucesses.
	ErrRegistryUpdateTimeout = errors.New("registry update timed out before reaching the minimum amount of updated hosts")

	// MinUpdateRegistrySuccesses is the minimum amount of success responses we
	// require from UpdateRegistry to be valid.
	MinUpdateRegistrySuccesses = build.Select(build.Var{
		Dev:      3,
		Standard: 3,
		Testing:  3,
	}).(int)

	// ReadRegistryBackgroundTimeout is the amount of time a read registry job
	// can stay active in the background before being cancelled.
	ReadRegistryBackgroundTimeout = build.Select(build.Var{
		Dev:      time.Minute,
		Standard: 2 * time.Minute,
		Testing:  5 * time.Second,
	}).(time.Duration)
)
var (
	// DefaultPauseDuration is the default duration that the repairs and uploads
	// will be paused
	DefaultPauseDuration = build.Select(build.Var{
		Standard: 10 * time.Minute,
		Dev:      1 * time.Minute,
		Testing:  100 * time.Millisecond,
	}).(time.Duration)
)
var (
	// ErrJobDiscarded is returned by a job if worker conditions have resulted
	// in the worker being able to run this type of job. Perhaps another job of
	// the same type failed recently, or some prerequisite like an ephemeral
	// account refill is not being met. The error may or may not be extended to
	// provide a reason.
	ErrJobDiscarded = errors.New("job is being discarded")
)
var (
	// ErrUploadDirectory is returned if the user tries to upload a directory.
	ErrUploadDirectory = errors.New("cannot upload directory")
)

Functions

func AlertCauseSiafileLowRedundancy added in v1.5.7

func AlertCauseSiafileLowRedundancy(siaPath modules.SiaPath, health, redundancy float64) string

AlertCauseSiafileLowRedundancy creates a customized "cause" for a siafile with a certain path and health.

func NewDownloadDestinationBuffer added in v1.3.3

func NewDownloadDestinationBuffer() *downloadDestinationBuffer

NewDownloadDestinationBuffer allocates the necessary number of shards for the downloadDestinationBuffer and returns the new buffer.

func NewSectionWriter added in v1.5.7

func NewSectionWriter(w io.WriterAt, off int64, n int64) *sectionWriter

NewSectionWriter returns a sectionWriter that writes to w starting at offset off and stops with EOF after n bytes.

Types

type MockRPCClient added in v1.5.7

type MockRPCClient struct{}

MockRPCClient mocks the RPC Client

func (*MockRPCClient) FundEphemeralAccount added in v1.5.7

func (m *MockRPCClient) FundEphemeralAccount(id modules.AccountID, amount types.Currency) error

FundEphemeralAccount funds the given ephemeral account by given amount.

func (*MockRPCClient) UpdatePriceTable added in v1.5.7

func (m *MockRPCClient) UpdatePriceTable() error

UpdatePriceTable updates the price table.

type RPCClient added in v1.5.7

type RPCClient interface {
	// UpdatePriceTable updates the price table.
	UpdatePriceTable() error
	// FundEphemeralAccount funds the given ephemeral account by given amount.
	FundEphemeralAccount(id modules.AccountID, amount types.Currency) error
}

RPCClient interface lists all possible RPC that can be called on the host

type Renter

type Renter struct {
	// contains filtered or unexported fields
}

A Renter is responsible for tracking all of the files that a user has uploaded to Sia, as well as the locations and health of these files.

func New

func New(g modules.Gateway, cs modules.ConsensusSet, wallet modules.Wallet, tpool modules.TransactionPool, mux *siamux.SiaMux, rl *ratelimit.RateLimit, persistDir string) (*Renter, <-chan error)

New returns an initialized renter.

func NewCustomRenter added in v1.3.3

func NewCustomRenter(g modules.Gateway, cs modules.ConsensusSet, tpool modules.TransactionPool, hdb modules.HostDB, w modules.Wallet, hc hostContractor, mux *siamux.SiaMux, persistDir string, rl *ratelimit.RateLimit, deps modules.Dependencies) (*Renter, <-chan error)

NewCustomRenter initializes a renter and returns it.

func (*Renter) ActiveHosts added in v1.0.0

func (r *Renter) ActiveHosts() ([]modules.HostDBEntry, error)

ActiveHosts returns an array of hostDB's active hosts

func (*Renter) Alerts added in v1.5.7

func (r *Renter) Alerts() (crit, err, warn, info []modules.Alert)

Alerts implements the modules.Alerter interface for the renter. It returns all alerts of the renter and its submodules.

func (*Renter) AllHosts added in v1.0.0

func (r *Renter) AllHosts() ([]modules.HostDBEntry, error)

AllHosts returns an array of all hosts

func (*Renter) BackupsOnHost added in v1.5.7

func (r *Renter) BackupsOnHost(hostKey types.SiaPublicKey) ([]modules.UploadedBackup, error)

BackupsOnHost returns the backups stored on a particular host. This operation can take multiple minutes if the renter is performing many other operations on this host, however this operation is given high priority over other types of operations.

func (*Renter) BubbleMetadata added in v1.5.7

func (r *Renter) BubbleMetadata(siaPath modules.SiaPath, force, recursive bool) error

BubbleMetadata will queue a bubble update for the directory. A bubble update includes calculating the updated values of a directory's metadata, updating the siadir metadata on disk, and then queuing a bubble update for the parent directory. This process will continue until the root directory is reached.

This method is only blocking for the queuing of the bubble, or the preparation of the subtree if recursive is true.

If the recursive boolean is supplied, all sub directories will be queued.

If the force boolean is supplied, the LastHealthCheckTime of the directories will be ignored so all directories will be considered.

func (*Renter) CancelContract added in v1.3.4

func (r *Renter) CancelContract(id types.FileContractID) error

CancelContract cancels a renter's contract by ID by setting goodForRenew and goodForUpload to false

func (*Renter) ClearDownloadHistory added in v1.3.4

func (r *Renter) ClearDownloadHistory(after, before time.Time) error

ClearDownloadHistory clears the renter's download history inclusive of the provided before and after timestamps

TODO: This function can be improved by implementing a binary search, the trick will be making the binary search be just as readable while handling all the edge cases

func (*Renter) Close added in v1.0.0

func (r *Renter) Close() error

Close closes the Renter and its dependencies

func (*Renter) ContractStatus added in v1.5.7

func (r *Renter) ContractStatus(fcID types.FileContractID) (modules.ContractWatchStatus, bool)

ContractStatus returns the status of the given contract within the watchdog, and a bool indicating whether or not it is being monitored.

func (*Renter) ContractUtility added in v1.3.2

func (r *Renter) ContractUtility(pk types.SiaPublicKey) (modules.ContractUtility, bool)

ContractUtility returns the utility field for a given contract, along with a bool indicating if it exists.

func (*Renter) ContractorChurnStatus added in v1.5.7

func (r *Renter) ContractorChurnStatus() modules.ContractorChurnStatus

ContractorChurnStatus returns contract churn stats for the current period.

func (*Renter) Contracts added in v1.0.0

func (r *Renter) Contracts() []modules.RenterContract

Contracts returns an array of host contractor's staticContracts

func (*Renter) CreateBackup added in v1.4.0

func (r *Renter) CreateBackup(dst string, secret []byte) error

CreateBackup creates a backup of the renter's siafiles. If a secret is not nil, the backup will be encrypted using the provided secret.

func (*Renter) CreateDir added in v1.4.0

func (r *Renter) CreateDir(siaPath modules.SiaPath, mode os.FileMode) error

CreateDir creates a directory for the renter

func (*Renter) CurrentPeriod added in v1.1.0

func (r *Renter) CurrentPeriod() types.BlockHeight

CurrentPeriod returns the host contractor's current period

func (*Renter) DeleteDir added in v1.4.0

func (r *Renter) DeleteDir(siaPath modules.SiaPath) error

DeleteDir removes a directory from the renter and deletes all its sub directories and files

func (*Renter) DeleteFile added in v0.3.1

func (r *Renter) DeleteFile(siaPath modules.SiaPath) error

DeleteFile removes a file entry from the renter and deletes its data from the hosts it is stored on.

func (*Renter) DirList added in v1.4.0

func (r *Renter) DirList(siaPath modules.SiaPath) (dis []modules.DirectoryInfo, _ error)

DirList lists the directories in a siadir

func (*Renter) Download

Download creates a file download using the passed parameters and blocks until the download is finished. The download needs to be started by calling the returned method.

func (*Renter) DownloadAsync added in v1.3.3

func (r *Renter) DownloadAsync(p modules.RenterDownloadParameters, f func(error) error) (id modules.DownloadID, start func() error, cancel func(), err error)

DownloadAsync creates a file download using the passed parameters without blocking until the download is finished. The download needs to be started using the method returned by DownloadAsync. DownloadAsync also accepts an optional input function which will be registered to be called when the download is finished.

func (*Renter) DownloadBackup added in v1.5.7

func (r *Renter) DownloadBackup(dst string, name string) (err error)

DownloadBackup downloads the specified backup.

func (*Renter) DownloadByUID added in v1.5.7

func (r *Renter) DownloadByUID(uid modules.DownloadID) (modules.DownloadInfo, bool)

DownloadByUID returns a single download from the history by its UID.

func (*Renter) DownloadHistory added in v1.3.2

func (r *Renter) DownloadHistory() []modules.DownloadInfo

DownloadHistory returns the list of downloads that have been performed. Will include downloads that have not yet completed. Downloads will be roughly, but not precisely, sorted according to start time.

TODO: Currently the DownloadHistory only contains downloads from this session, does not contain downloads that were executed for the purposes of repairing, and has no way to clear the download history if it gets long or unwieldy. It's not entirely certain which of the missing features are actually desirable, please consult core team + app dev community before deciding what to implement.

func (*Renter) EstimateHostScore added in v1.3.0

EstimateHostScore returns the estimated host score

func (*Renter) File added in v1.3.3

func (r *Renter) File(siaPath modules.SiaPath) (modules.FileInfo, error)

File returns file from siaPath queried by user. Update based on FileList

func (*Renter) FileCached added in v1.5.7

func (r *Renter) FileCached(siaPath modules.SiaPath) (modules.FileInfo, error)

FileCached returns file from siaPath queried by user, using cached values for health and redundancy.

func (*Renter) FileHosts added in v1.5.8

func (r *Renter) FileHosts(sp modules.SiaPath) (hosts []modules.HostDBEntry, _ error)

func (*Renter) FileList

func (r *Renter) FileList(siaPath modules.SiaPath, recursive, cached bool, flf modules.FileListFunc) error

FileList loops over all the files within the directory specified by siaPath and will then call the provided listing function on the file.

func (*Renter) Filter added in v1.5.7

func (r *Renter) Filter() (modules.FilterMode, map[string]types.SiaPublicKey, []string, error)

Filter returns the renter's hostdb's filterMode and filteredHosts

func (*Renter) Host added in v1.1.1

Host returns the host associated with the given public key

func (*Renter) InitRecoveryScan added in v1.4.0

func (r *Renter) InitRecoveryScan() error

InitRecoveryScan starts scanning the whole blockchain for recoverable contracts within a separate thread.

func (*Renter) InitialScanComplete added in v1.3.3

func (r *Renter) InitialScanComplete() (bool, error)

InitialScanComplete returns a boolean indicating if the initial scan of the hostdb is completed.

func (*Renter) LoadBackup added in v1.4.0

func (r *Renter) LoadBackup(src string, secret []byte) (err error)

LoadBackup loads the siafiles of a previously created backup into the renter. If the backup is encrypted, secret will be used to decrypt it. Otherwise the argument is ignored.

func (*Renter) MemoryStatus added in v1.5.7

func (r *Renter) MemoryStatus() (modules.MemoryStatus, error)

MemoryStatus returns the current status of the memory manager

func (*Renter) Mount added in v1.5.7

func (r *Renter) Mount(mountPoint string, sp modules.SiaPath, opts modules.MountOptions) error

Mount mounts the files under the specified siapath at the 'mountPoint' folder on the local filesystem.
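
A sketch of a read-only mount of the whole filesystem, assuming the modules.RootSiaPath helper and a ReadOnly field on modules.MountOptions (see also MountInfo and Unmount below):

// Assumed imports: "log", "go.sia.tech/siad/modules", "go.sia.tech/siad/modules/renter".
func mountSketch(r *renter.Renter) error {
	opts := modules.MountOptions{ReadOnly: true} // assumed field
	if err := r.Mount("/mnt/sia", modules.RootSiaPath(), opts); err != nil {
		return err
	}
	for _, mi := range r.MountInfo() {
		log.Printf("mounted: %+v", mi)
	}
	return r.Unmount("/mnt/sia")
}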

func (*Renter) MountInfo added in v1.5.7

func (r *Renter) MountInfo() []modules.MountInfo

MountInfo returns the list of currently mounted FUSE filesystems.

func (*Renter) OldContracts added in v1.3.4

func (r *Renter) OldContracts() []modules.RenterContract

OldContracts returns the host contractor's oldContracts.

func (*Renter) PauseRepairsAndUploads added in v1.5.7

func (r *Renter) PauseRepairsAndUploads(duration time.Duration) error

PauseRepairsAndUploads pauses the renter's repairs and uploads for the given duration.

func (*Renter) PeriodSpending added in v1.3.1

func (r *Renter) PeriodSpending() (modules.ContractorSpending, error)

PeriodSpending returns the host contractor's period spending

func (*Renter) PriceEstimation added in v1.1.1

func (r *Renter) PriceEstimation(allowance modules.Allowance) (modules.RenterPriceEstimation, modules.Allowance, error)

PriceEstimation estimates the cost in siacoins of performing various storage and data operations. The estimation is done using the provided allowance; if an empty allowance is provided, the renter's current allowance is used instead, if one is set. The allowance that was actually used is returned.
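
For example, passing a zero-valued allowance asks the renter to estimate against its current allowance (a minimal sketch, import paths assumed):

// Assumed imports: "log", "go.sia.tech/siad/modules", "go.sia.tech/siad/modules/renter".
func priceEstimationSketch(r *renter.Renter) error {
	estimate, allowanceUsed, err := r.PriceEstimation(modules.Allowance{})
	if err != nil {
		return err
	}
	log.Printf("estimate: %+v (using allowance: %+v)", estimate, allowanceUsed)
	return nil
}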

func (*Renter) ProcessConsensusChange added in v1.3.1

func (r *Renter) ProcessConsensusChange(cc modules.ConsensusChange)

ProcessConsensusChange processes the given consensus change.

func (*Renter) ReadRegistry added in v1.5.7

func (r *Renter) ReadRegistry(spk types.SiaPublicKey, tweak crypto.Hash, timeout time.Duration) (modules.SignedRegistryValue, error)

ReadRegistry starts a registry lookup on all available workers. The jobs have 'timeout' amount of time to finish and return a response; the response with the highest revision number is used.
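
A minimal sketch of a lookup, with the host public key and tweak supplied by the caller (import paths assumed):

// Assumed imports: "log", "time", "go.sia.tech/siad/crypto", "go.sia.tech/siad/modules/renter", "go.sia.tech/siad/types".
func readRegistrySketch(r *renter.Renter, spk types.SiaPublicKey, tweak crypto.Hash) error {
	srv, err := r.ReadRegistry(spk, tweak, 30*time.Second)
	if err != nil {
		return err
	}
	log.Printf("best signed registry value: %+v", srv)
	return nil
}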

func (*Renter) RecoverableContracts added in v1.4.0

func (r *Renter) RecoverableContracts() []modules.RecoverableContract

RecoverableContracts returns the host contractor's recoverable contracts.

func (*Renter) RecoveryScanStatus added in v1.4.0

func (r *Renter) RecoveryScanStatus() (bool, types.BlockHeight)

RecoveryScanStatus returns a bool indicating whether a scan for recoverable contracts is in progress and, if so, the current progress of the scan.
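
Together with InitRecoveryScan above, a sketch of starting a scan and polling it (import paths assumed):

// Assumed imports: "log", "time", "go.sia.tech/siad/modules/renter".
func recoveryScanSketch(r *renter.Renter) error {
	if err := r.InitRecoveryScan(); err != nil {
		return err
	}
	for {
		inProgress, height := r.RecoveryScanStatus()
		if !inProgress {
			return nil
		}
		log.Printf("recovery scan in progress, current height %v", height)
		time.Sleep(10 * time.Second)
	}
}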

func (*Renter) RefreshedContract added in v1.5.7

func (r *Renter) RefreshedContract(fcid types.FileContractID) bool

RefreshedContract returns a bool indicating if the contract was previously refreshed

func (*Renter) RenameDir added in v1.5.7

func (r *Renter) RenameDir(oldPath, newPath modules.SiaPath) error

RenameDir takes an existing directory and changes its path. The original directory must exist, and no directory may already have the replacement path. All siafiles within the directory will also be renamed.

func (*Renter) RenameFile added in v0.3.1

func (r *Renter) RenameFile(currentName, newName modules.SiaPath) error

RenameFile takes an existing file and changes the nickname. The original file must exist, and there must not be any file that already has the replacement nickname.

func (*Renter) ResumeRepairsAndUploads added in v1.5.7

func (r *Renter) ResumeRepairsAndUploads() error

ResumeRepairsAndUploads resumes the renter's repairs and uploads

func (*Renter) ScoreBreakdown added in v1.1.1

func (r *Renter) ScoreBreakdown(e modules.HostDBEntry) (modules.HostScoreBreakdown, error)

ScoreBreakdown returns the score breakdown of the given host entry.

func (*Renter) SetFileStuck added in v1.5.7

func (r *Renter) SetFileStuck(siaPath modules.SiaPath, stuck bool) (err error)

SetFileStuck sets the Stuck field of the whole siafile to stuck.

func (*Renter) SetFileTrackingPath added in v1.3.5

func (r *Renter) SetFileTrackingPath(siaPath modules.SiaPath, newPath string) (err error)

SetFileTrackingPath sets the on-disk location of an uploaded file to a new value. Useful if files need to be moved on disk. SetFileTrackingPath will check that a file exists at the new location and that it has the right size, but it can't verify that the content is the same. The caller is therefore responsible for not accidentally corrupting the uploaded file by providing a different file with the same size.
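
A sketch of moving the local copy of a tracked file and updating the renter (paths and import paths assumed):

// Assumed imports: "os", "go.sia.tech/siad/modules", "go.sia.tech/siad/modules/renter".
func moveLocalCopySketch(r *renter.Renter) error {
	sp, err := modules.NewSiaPath("backups/archive.tar") // assumed helper for building a SiaPath
	if err != nil {
		return err
	}
	const newPath = "/data/archive.tar"
	if err := os.Rename("/tmp/archive.tar", newPath); err != nil {
		return err
	}
	// The renter verifies that a file of the expected size exists at newPath,
	// but it cannot verify that the content is unchanged.
	return r.SetFileTrackingPath(sp, newPath)
}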

func (*Renter) SetFilterMode added in v1.4.0

func (r *Renter) SetFilterMode(lm modules.FilterMode, hosts []types.SiaPublicKey, netAddresses []string) error

SetFilterMode sets the renter's hostdb filter mode

func (*Renter) SetIPViolationCheck added in v1.4.0

func (r *Renter) SetIPViolationCheck(enabled bool)

SetIPViolationCheck is a passthrough method to the hostdb's method of the same name.

func (*Renter) SetSettings added in v1.0.0

func (r *Renter) SetSettings(s modules.RenterSettings) error

SetSettings will update the settings for the renter.

NOTE: This function can't be atomic. Typically we try to make user requests atomic, so that either everything changes or nothing changes, but these changes are applied progressively. It is therefore possible for some of the settings (like the allowance) to succeed while a later one (the bandwidth limits, for example) is invalid, in which case the allowance is updated but the bandwidth is not.

func (*Renter) Settings added in v1.0.0

func (r *Renter) Settings() (modules.RenterSettings, error)

Settings returns the Renter's current settings.
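
A read-modify-write sketch using Settings and SetSettings; the MaxDownloadSpeed field on modules.RenterSettings and the import paths are assumptions, and because SetSettings is not atomic callers should be prepared for partial application as noted above:

// Assumed imports: "go.sia.tech/siad/modules/renter".
func setSpeedLimitSketch(r *renter.Renter) error {
	settings, err := r.Settings()
	if err != nil {
		return err
	}
	settings.MaxDownloadSpeed = 1 << 20 // assumed field, bytes per second
	return r.SetSettings(settings)
}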

func (*Renter) Streamer added in v1.3.3

func (r *Renter) Streamer(siaPath modules.SiaPath, disableLocalFetch bool) (_ string, _ modules.Streamer, err error)

Streamer creates a modules.Streamer that can be used to stream downloads from the Sia network.
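
A sketch of streaming a file into an io.Writer, assuming modules.Streamer satisfies io.Reader and io.Closer (import paths assumed):

// Assumed imports: "io", "go.sia.tech/siad/modules", "go.sia.tech/siad/modules/renter".
func streamSketch(r *renter.Renter, w io.Writer) error {
	sp, err := modules.NewSiaPath("videos/clip.mp4") // assumed helper for building a SiaPath
	if err != nil {
		return err
	}
	// disableLocalFetch=false allows serving from the local copy when one exists.
	_, streamer, err := r.Streamer(sp, false)
	if err != nil {
		return err
	}
	defer streamer.Close()
	_, err = io.Copy(w, streamer)
	return err
}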

func (*Renter) StreamerByNode added in v1.5.7

func (r *Renter) StreamerByNode(node *filesystem.FileNode, disableLocalFetch bool) (modules.Streamer, error)

StreamerByNode will open a streamer for the renter, taking a FileNode as input instead of a siapath. This is important for FUSE, which has file nodes that could be renamed before the streams are opened.

func (*Renter) Unmount added in v1.5.7

func (r *Renter) Unmount(mountPoint string) error

Unmount unmounts the fuse filesystem currently mounted at mountPoint.

func (*Renter) UpdateRegistry added in v1.5.7

func (r *Renter) UpdateRegistry(spk types.SiaPublicKey, srv modules.SignedRegistryValue, timeout time.Duration) error

UpdateRegistry updates the registries on all workers with the given registry value.

func (*Renter) Upload

func (r *Renter) Upload(up modules.FileUploadParams) error

Upload instructs the renter to start tracking a file. The renter will automatically upload and repair tracked files using a background loop.
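
A minimal sketch of tracking a new upload; the Source and SiaPath fields on modules.FileUploadParams and the import paths are assumptions, and all other parameters are left at their defaults:

// Assumed imports: "go.sia.tech/siad/modules", "go.sia.tech/siad/modules/renter".
func uploadSketch(r *renter.Renter) error {
	sp, err := modules.NewSiaPath("photos/cat.jpg") // assumed helper for building a SiaPath
	if err != nil {
		return err
	}
	return r.Upload(modules.FileUploadParams{
		Source:  "/home/user/cat.jpg", // assumed field: local path of the file to track
		SiaPath: sp,                   // assumed field: destination on the Sia filesystem
	})
}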

func (*Renter) UploadBackup added in v1.5.7

func (r *Renter) UploadBackup(src, name string) error

UploadBackup creates a backup of the renter which is uploaded to the Sia network as a snapshot and can be retrieved using only the seed.

func (*Renter) UploadStreamFromReader added in v1.5.7

func (r *Renter) UploadStreamFromReader(up modules.FileUploadParams, reader io.Reader) error

UploadStreamFromReader reads from the provided reader until io.EOF is reached and uploads the data to the Sia network.
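
A sketch of uploading directly from memory; the SiaPath field on modules.FileUploadParams and the import paths are assumptions:

// Assumed imports: "bytes", "go.sia.tech/siad/modules", "go.sia.tech/siad/modules/renter".
func uploadStreamSketch(r *renter.Renter, data []byte) error {
	sp, err := modules.NewSiaPath("notes/readme.txt") // assumed helper for building a SiaPath
	if err != nil {
		return err
	}
	return r.UploadStreamFromReader(modules.FileUploadParams{SiaPath: sp}, bytes.NewReader(data))
}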

func (*Renter) UploadedBackups added in v1.5.7

func (r *Renter) UploadedBackups() ([]modules.UploadedBackup, []types.SiaPublicKey, error)

UploadedBackups returns the backups that the renter can download, along with a list of which contracts are storing all known backups.
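
Combining UploadBackup, UploadedBackups, and DownloadBackup, a sketch of the backup round trip; the Name field on modules.UploadedBackup and the import paths are assumptions:

// Assumed imports: "log", "go.sia.tech/siad/modules/renter".
func backupSketch(r *renter.Renter) error {
	if err := r.UploadBackup("/tmp/snapshot.bak", "daily-backup"); err != nil {
		return err
	}
	backups, _, err := r.UploadedBackups()
	if err != nil {
		return err
	}
	for _, b := range backups {
		log.Println("known backup:", b.Name) // assumed field
	}
	return r.DownloadBackup("/tmp/restored.bak", "daily-backup")
}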

func (*Renter) WorkerPoolStatus added in v1.5.7

func (r *Renter) WorkerPoolStatus() (modules.WorkerPoolStatus, error)

WorkerPoolStatus returns the current status of the Renter's worker pool

type StreamShard added in v1.5.7

type StreamShard struct {
	// contains filtered or unexported fields
}

StreamShard is a helper type that allows an io.Reader to be split up into multiple readers, making it possible to wait for a shard to finish reading and then check the error of that Read. SignalChan will be closed when the shard has been closed.

func NewStreamShard added in v1.5.7

func NewStreamShard(r io.Reader, peek []byte) *StreamShard

NewStreamShard creates a new stream shard from a reader.

func (*StreamShard) Close added in v1.5.7

func (ss *StreamShard) Close() error

Close closes the underlying channel of the shard.

func (*StreamShard) Peek added in v1.5.7

func (ss *StreamShard) Peek() ([]byte, error)

Peek will check to see if there is more data in the stream.

func (*StreamShard) Read added in v1.5.7

func (ss *StreamShard) Read(b []byte) (int, error)

Read implements the io.Reader interface.

func (*StreamShard) Result added in v1.5.7

func (ss *StreamShard) Result() (int, error)

Result returns the returned values of calling Read on the shard.
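
A rough sketch of exercising the exported StreamShard surface on its own; the nil peek argument and the read-close-result ordering are assumptions about intended use, and import paths are assumed:

// Assumed imports: "log", "strings", "go.sia.tech/siad/modules/renter".
func streamShardSketch() error {
	shard := renter.NewStreamShard(strings.NewReader("hello world"), nil)
	if _, err := shard.Peek(); err != nil {
		return err // no data available in the stream
	}
	buf := make([]byte, 5)
	if _, err := shard.Read(buf); err != nil {
		return err
	}
	if err := shard.Close(); err != nil {
		return err
	}
	n, err := shard.Result()
	log.Printf("shard read %d bytes (err=%v)", n, err)
	return nil
}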

Directories

Path Synopsis
hostdb	Package hostdb provides a HostDB object that implements the renter.hostDB interface.
