package files

v0.54.0
Published: Dec 17, 2024 License: Apache-2.0 Imports: 12 Imported by: 3

Documentation

Overview

These APIs allow you to manage Dbfs, Files, etc.

Databricks File System (DBFS) API

We recommend using a client created via [databricks.NewWorkspaceClient] to simplify the configuration experience.

Reading and writing files

You can open a file on DBFS for reading or writing with DbfsAPI.Open. This function returns a Handle that is compatible with a subset of io interfaces for reading, writing, and closing.

Uploading a file from an io.Reader:

upload, _ := os.Open("/path/to/local/file.ext")
remote, _ := w.Dbfs.Open(ctx, "/path/to/remote/file", files.FileModeWrite|files.FileModeOverwrite)
io.Copy(remote, upload)
remote.Close()

Downloading a file to an io.Writer:

download, _ := os.Create("/path/to/local")
remote, _ := w.Dbfs.Open(ctx, "/path/to/remote/file", files.FileModeRead)
_ = io.Copy(download, remote)

Reading and writing files from buffers

You can read from or write to a DBFS file directly from a byte slice through the convenience functions DbfsAPI.ReadFile and DbfsAPI.WriteFile.

Uploading a file from a byte slice:

buf := []byte("Hello world!")
_ = w.Dbfs.WriteFile(ctx, "/path/to/remote/file", buf)

Downloading a file into a byte slice:

buf, err := w.Dbfs.ReadFile(ctx, "/path/to/remote/file")

Moving files

err := w.Dbfs.Move(ctx, files.Move{
	SourcePath:      "/remote/src/path",
	DestinationPath: "/remote/dst/path",
})

Creating directories

w.Dbfs.MkdirsByPath(ctx, "/remote/dir/path")

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AddBlock

type AddBlock struct {
	// The base64-encoded data to append to the stream. This has a limit of 1
	// MB.
	Data string `json:"data"`
	// The handle on an open stream.
	Handle int64 `json:"handle"`
}

type AddBlockResponse added in v0.34.0

type AddBlockResponse struct {
}

type Close

type Close struct {
	// The handle on an open stream.
	Handle int64 `json:"handle"`
}

type CloseResponse added in v0.34.0

type CloseResponse struct {
}

type Create

type Create struct {
	// The flag that specifies whether to overwrite existing file/files.
	Overwrite bool `json:"overwrite,omitempty"`
	// The path of the new file. The path should be the absolute DBFS path.
	Path string `json:"path"`

	ForceSendFields []string `json:"-"`
}

func (Create) MarshalJSON added in v0.23.0

func (s Create) MarshalJSON() ([]byte, error)

func (*Create) UnmarshalJSON added in v0.23.0

func (s *Create) UnmarshalJSON(b []byte) error

type CreateDirectoryRequest added in v0.31.0

type CreateDirectoryRequest struct {
	// The absolute path of a directory.
	DirectoryPath string `json:"-" url:"-"`
}

Create a directory

type CreateDirectoryResponse added in v0.34.0

type CreateDirectoryResponse struct {
}

type CreateResponse

type CreateResponse struct {
	// Handle which should subsequently be passed into the AddBlock and Close
	// calls when writing to a file through a stream.
	Handle int64 `json:"handle,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateResponse) MarshalJSON added in v0.23.0

func (s CreateResponse) MarshalJSON() ([]byte, error)

func (*CreateResponse) UnmarshalJSON added in v0.23.0

func (s *CreateResponse) UnmarshalJSON(b []byte) error

type DbfsAPI

type DbfsAPI struct {
	// contains filtered or unexported fields
}

The DBFS API makes it simple to interact with various data sources without having to include a user's credentials every time you read a file.

func NewDbfs

func NewDbfs(client *client.DatabricksClient) *DbfsAPI

func (*DbfsAPI) AddBlock

func (a *DbfsAPI) AddBlock(ctx context.Context, request AddBlock) error

func (*DbfsAPI) Close

func (a *DbfsAPI) Close(ctx context.Context, request Close) error

func (*DbfsAPI) CloseByHandle

func (a *DbfsAPI) CloseByHandle(ctx context.Context, handle int64) error

Close the stream.

Closes the stream specified by the input handle. If the handle does not exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.

func (*DbfsAPI) Create

func (a *DbfsAPI) Create(ctx context.Context, request Create) (*CreateResponse, error)

func (*DbfsAPI) Delete

func (a *DbfsAPI) Delete(ctx context.Context, request Delete) error

func (*DbfsAPI) GetStatus

func (a *DbfsAPI) GetStatus(ctx context.Context, request GetStatusRequest) (*FileInfo, error)

func (*DbfsAPI) GetStatusByPath

func (a *DbfsAPI) GetStatusByPath(ctx context.Context, path string) (*FileInfo, error)

Get the information of a file or directory.

Gets the file information for a file or directory. If the file or directory does not exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.
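
For example, a quick existence and size check (a minimal sketch, assuming w and ctx as in the overview examples; the path is illustrative):

info, err := w.Dbfs.GetStatusByPath(ctx, "/path/to/remote/file")
if err != nil {
	return err // the path does not exist or is not accessible
}
fmt.Printf("dir=%v size=%d bytes\n", info.IsDir, info.FileSize)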

func (*DbfsAPI) List added in v0.24.0

func (a *DbfsAPI) List(ctx context.Context, request ListDbfsRequest) listing.Iterator[FileInfo]

List directory contents or file details.

List the contents of a directory, or details of the file. If the file or directory does not exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.

When calling list on a large directory, the list operation will time out after approximately 60 seconds. We strongly recommend using list only on directories containing less than 10K files and discourage using the DBFS REST API for operations that list more than 10K files. Instead, we recommend that you perform such operations in the context of a cluster, using the [File system utility (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs), which provides the same functionality without timing out.

This method is generated by Databricks SDK Code Generator.
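
A minimal sketch of consuming the returned iterator (assuming listing.Iterator exposes HasNext and Next, as elsewhere in this SDK; the directory path is illustrative):

it := w.Dbfs.List(ctx, files.ListDbfsRequest{Path: "/remote/dir"})
for it.HasNext(ctx) {
	fi, err := it.Next(ctx)
	if err != nil {
		return err
	}
	fmt.Println(fi.Path)
}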

func (*DbfsAPI) ListAll

func (a *DbfsAPI) ListAll(ctx context.Context, request ListDbfsRequest) ([]FileInfo, error)

List directory contents or file details.

List the contents of a directory, or details of the file. If the file or directory does not exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.

When calling list on a large directory, the list operation will time out after approximately 60 seconds. We strongly recommend using list only on directories containing less than 10K files and discourage using the DBFS REST API for operations that list more than 10K files. Instead, we recommend that you perform such operations in the context of a cluster, using the [File system utility (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs), which provides the same functionality without timing out.

This method is generated by Databricks SDK Code Generator.
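
Or, to collect the whole listing into a slice in one call (a minimal sketch; the directory path is illustrative):

infos, err := w.Dbfs.ListAll(ctx, files.ListDbfsRequest{Path: "/remote/dir"})
if err != nil {
	return err
}
for _, fi := range infos {
	fmt.Println(fi.Path, fi.FileSize)
}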

func (*DbfsAPI) ListByPath

func (a *DbfsAPI) ListByPath(ctx context.Context, path string) (*ListStatusResponse, error)

List directory contents or file details.

List the contents of a directory, or details of the file. If the file or directory does not exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.

When calling list on a large directory, the list operation will time out after approximately 60 seconds. We strongly recommend using list only on directories containing less than 10K files and discourage using the DBFS REST API for operations that list more than 10K files. Instead, we recommend that you perform such operations in the context of a cluster, using the [File system utility (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs), which provides the same functionality without timing out.

func (*DbfsAPI) Mkdirs

func (a *DbfsAPI) Mkdirs(ctx context.Context, request MkDirs) error

func (*DbfsAPI) MkdirsByPath

func (a *DbfsAPI) MkdirsByPath(ctx context.Context, path string) error

Create a directory.

Creates the given directory and necessary parent directories if they do not exist. If a file (not a directory) exists at any prefix of the input path, this call throws an exception with `RESOURCE_ALREADY_EXISTS`. **Note**: If this operation fails, it might have succeeded in creating some of the necessary parent directories.

func (*DbfsAPI) Move

func (a *DbfsAPI) Move(ctx context.Context, request Move) error

func (*DbfsAPI) Open

func (a *DbfsAPI) Open(ctx context.Context, path string, mode FileMode) (Handle, error)

Open opens a remote DBFS file for reading or writing. The returned object implements relevant io interfaces for convenient integration with other code that reads or writes bytes.

The io.WriterTo interface is provided and maximizes throughput for bulk reads by reading data with the DBFS maximum read chunk size of 1MB. Similarly, the io.ReaderFrom interface is provided for bulk writing.

A file opened for writing must always be closed.

func (*DbfsAPI) Put

func (a *DbfsAPI) Put(ctx context.Context, request Put) error

func (*DbfsAPI) Read

func (a *DbfsAPI) Read(ctx context.Context, request ReadDbfsRequest) (*ReadResponse, error)

func (*DbfsAPI) ReadFile

func (a *DbfsAPI) ReadFile(ctx context.Context, name string) ([]byte, error)

ReadFile is identical to os.ReadFile but for DBFS.

func (DbfsAPI) RecursiveList

func (a DbfsAPI) RecursiveList(ctx context.Context, path string) ([]FileInfo, error)

RecursiveList traverses the DBFS tree and returns all non-directory objects under the path.
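
For example, totaling the size of everything under a directory (a minimal sketch):

infos, err := w.Dbfs.RecursiveList(ctx, "/remote/dir")
if err != nil {
	return err
}
var total int64
for _, fi := range infos {
	total += fi.FileSize
}
fmt.Printf("%d bytes in %d objects\n", total, len(infos))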

func (*DbfsAPI) WriteFile

func (a *DbfsAPI) WriteFile(ctx context.Context, name string, data []byte) error

WriteFile is identical to os.WriteFile but for DBFS.

type DbfsInterface added in v0.29.0

type DbfsInterface interface {

	// Append data block.
	//
	// Appends a block of data to the stream specified by the input handle. If the
	// handle does not exist, this call will throw an exception with
	// `RESOURCE_DOES_NOT_EXIST`.
	//
	// If the block of data exceeds 1 MB, this call will throw an exception with
	// `MAX_BLOCK_SIZE_EXCEEDED`.
	AddBlock(ctx context.Context, request AddBlock) error

	// Close the stream.
	//
	// Closes the stream specified by the input handle. If the handle does not
	// exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.
	Close(ctx context.Context, request Close) error

	// Close the stream.
	//
	// Closes the stream specified by the input handle. If the handle does not
	// exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.
	CloseByHandle(ctx context.Context, handle int64) error

	// Open a stream.
	//
	// Opens a stream to write to a file and returns a handle to this stream. There
	// is a 10 minute idle timeout on this handle. If a file or directory already
	// exists on the given path and __overwrite__ is set to false, this call will
	// throw an exception with `RESOURCE_ALREADY_EXISTS`.
	//
	// A typical workflow for file upload would be:
	//
	// 1. Issue a `create` call and get a handle.
	// 2. Issue one or more `add-block` calls with the handle you have.
	// 3. Issue a `close` call with the handle you have.
	Create(ctx context.Context, request Create) (*CreateResponse, error)

	// Delete a file/directory.
	//
	// Delete the file or directory (optionally recursively delete all files in the
	// directory). This call throws an exception with `IO_ERROR` if the path is a
	// non-empty directory and `recursive` is set to `false` or on other similar
	// errors.
	//
	// When you delete a large number of files, the delete operation is done in
	// increments. The call returns a response after approximately 45 seconds with
	// an error message (503 Service Unavailable) asking you to re-invoke the delete
	// operation until the directory structure is fully deleted.
	//
	// For operations that delete more than 10K files, we discourage using the DBFS
	// REST API, but advise you to perform such operations in the context of a
	// cluster, using the [File system utility
	// (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs). `dbutils.fs`
	// covers the functional scope of the DBFS REST API, but from notebooks. Running
	// such operations using notebooks provides better control and manageability,
	// such as selective deletes, and the possibility to automate periodic delete
	// jobs.
	Delete(ctx context.Context, request Delete) error

	// Get the information of a file or directory.
	//
	// Gets the file information for a file or directory. If the file or directory
	// does not exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.
	GetStatus(ctx context.Context, request GetStatusRequest) (*FileInfo, error)

	// Get the information of a file or directory.
	//
	// Gets the file information for a file or directory. If the file or directory
	// does not exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.
	GetStatusByPath(ctx context.Context, path string) (*FileInfo, error)

	// List directory contents or file details.
	//
	// List the contents of a directory, or details of the file. If the file or
	// directory does not exist, this call throws an exception with
	// `RESOURCE_DOES_NOT_EXIST`.
	//
	// When calling list on a large directory, the list operation will time out
	// after approximately 60 seconds. We strongly recommend using list only on
	// directories containing less than 10K files and discourage using the DBFS REST
	// API for operations that list more than 10K files. Instead, we recommend that
	// you perform such operations in the context of a cluster, using the [File
	// system utility (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs),
	// which provides the same functionality without timing out.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListDbfsRequest) listing.Iterator[FileInfo]

	// List directory contents or file details.
	//
	// List the contents of a directory, or details of the file. If the file or
	// directory does not exist, this call throws an exception with
	// `RESOURCE_DOES_NOT_EXIST`.
	//
	// When calling list on a large directory, the list operation will time out
	// after approximately 60 seconds. We strongly recommend using list only on
	// directories containing less than 10K files and discourage using the DBFS REST
	// API for operations that list more than 10K files. Instead, we recommend that
	// you perform such operations in the context of a cluster, using the [File
	// system utility (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs),
	// which provides the same functionality without timing out.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListDbfsRequest) ([]FileInfo, error)

	// List directory contents or file details.
	//
	// List the contents of a directory, or details of the file. If the file or
	// directory does not exist, this call throws an exception with
	// `RESOURCE_DOES_NOT_EXIST`.
	//
	// When calling list on a large directory, the list operation will time out
	// after approximately 60 seconds. We strongly recommend using list only on
	// directories containing less than 10K files and discourage using the DBFS REST
	// API for operations that list more than 10K files. Instead, we recommend that
	// you perform such operations in the context of a cluster, using the [File
	// system utility (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs),
	// which provides the same functionality without timing out.
	ListByPath(ctx context.Context, path string) (*ListStatusResponse, error)

	// Create a directory.
	//
	// Creates the given directory and necessary parent directories if they do not
	// exist. If a file (not a directory) exists at any prefix of the input path,
	// this call throws an exception with `RESOURCE_ALREADY_EXISTS`. **Note**: If
	// this operation fails, it might have succeeded in creating some of the
	// necessary parent directories.
	Mkdirs(ctx context.Context, request MkDirs) error

	// Create a directory.
	//
	// Creates the given directory and necessary parent directories if they do not
	// exist. If a file (not a directory) exists at any prefix of the input path,
	// this call throws an exception with `RESOURCE_ALREADY_EXISTS`. **Note**: If
	// this operation fails, it might have succeeded in creating some of the
	// necessary parent directories.
	MkdirsByPath(ctx context.Context, path string) error

	// Move a file.
	//
	// Moves a file from one location to another location within DBFS. If the source
	// file does not exist, this call throws an exception with
	// `RESOURCE_DOES_NOT_EXIST`. If a file already exists in the destination path,
	// this call throws an exception with `RESOURCE_ALREADY_EXISTS`. If the given
	// source path is a directory, this call always recursively moves all files.
	Move(ctx context.Context, request Move) error

	// Upload a file.
	//
	// Uploads a file through the use of multipart form post. It is mainly used for
	// streaming uploads, but can also be used as a convenient single call for data
	// upload.
	//
	// Alternatively, you can pass the contents as a base64 string.
	//
	// The amount of data that can be passed (when not streaming) using the
	// __contents__ parameter is limited to 1 MB. `MAX_BLOCK_SIZE_EXCEEDED` will be
	// thrown if this limit is exceeded.
	//
	// If you want to upload large files, use the streaming upload. For details, see
	// :method:dbfs/create, :method:dbfs/addBlock, :method:dbfs/close.
	Put(ctx context.Context, request Put) error

	// Get the contents of a file.
	//
	// Returns the contents of a file. If the file does not exist, this call throws
	// an exception with `RESOURCE_DOES_NOT_EXIST`. If the path is a directory, the
	// read length is negative, or if the offset is negative, this call throws an
	// exception with `INVALID_PARAMETER_VALUE`. If the read length exceeds 1 MB,
	// this call throws an exception with `MAX_READ_SIZE_EXCEEDED`.
	//
	// If `offset + length` exceeds the number of bytes in a file, it reads the
	// contents until the end of file.
	Read(ctx context.Context, request ReadDbfsRequest) (*ReadResponse, error)
	// contains filtered or unexported methods
}
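
The create / add-block / close streaming workflow described above can be driven directly from DbfsAPI. A minimal sketch, assuming w and ctx as in the overview, encoding/base64 for the block payload, and an illustrative path:

resp, err := w.Dbfs.Create(ctx, files.Create{Path: "/path/to/remote/file", Overwrite: true})
if err != nil {
	return err
}
// Each block is base64-encoded and limited to 1 MB of data.
err = w.Dbfs.AddBlock(ctx, files.AddBlock{
	Handle: resp.Handle,
	Data:   base64.StdEncoding.EncodeToString([]byte("Hello world!")),
})
if err != nil {
	return err
}
if err := w.Dbfs.CloseByHandle(ctx, resp.Handle); err != nil {
	return err
}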

type DbfsService

type DbfsService interface {

	// Append data block.
	//
	// Appends a block of data to the stream specified by the input handle. If
	// the handle does not exist, this call will throw an exception with
	// `RESOURCE_DOES_NOT_EXIST`.
	//
	// If the block of data exceeds 1 MB, this call will throw an exception with
	// `MAX_BLOCK_SIZE_EXCEEDED`.
	AddBlock(ctx context.Context, request AddBlock) error

	// Close the stream.
	//
	// Closes the stream specified by the input handle. If the handle does not
	// exist, this call throws an exception with `RESOURCE_DOES_NOT_EXIST`.
	Close(ctx context.Context, request Close) error

	// Open a stream.
	//
	// Opens a stream to write to a file and returns a handle to this stream.
	// There is a 10 minute idle timeout on this handle. If a file or directory
	// already exists on the given path and __overwrite__ is set to false, this
	// call will throw an exception with `RESOURCE_ALREADY_EXISTS`.
	//
	// A typical workflow for file upload would be:
	//
	// 1. Issue a `create` call and get a handle.
	// 2. Issue one or more `add-block` calls with the handle you have.
	// 3. Issue a `close` call with the handle you have.
	Create(ctx context.Context, request Create) (*CreateResponse, error)

	// Delete a file/directory.
	//
	// Delete the file or directory (optionally recursively delete all files in
	// the directory). This call throws an exception with `IO_ERROR` if the path
	// is a non-empty directory and `recursive` is set to `false` or on other
	// similar errors.
	//
	// When you delete a large number of files, the delete operation is done in
	// increments. The call returns a response after approximately 45 seconds
	// with an error message (503 Service Unavailable) asking you to re-invoke
	// the delete operation until the directory structure is fully deleted.
	//
	// For operations that delete more than 10K files, we discourage using the
	// DBFS REST API, but advise you to perform such operations in the context
	// of a cluster, using the [File system utility
	// (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs). `dbutils.fs`
	// covers the functional scope of the DBFS REST API, but from notebooks.
	// Running such operations using notebooks provides better control and
	// manageability, such as selective deletes, and the possibility to automate
	// periodic delete jobs.
	Delete(ctx context.Context, request Delete) error

	// Get the information of a file or directory.
	//
	// Gets the file information for a file or directory. If the file or
	// directory does not exist, this call throws an exception with
	// `RESOURCE_DOES_NOT_EXIST`.
	GetStatus(ctx context.Context, request GetStatusRequest) (*FileInfo, error)

	// List directory contents or file details.
	//
	// List the contents of a directory, or details of the file. If the file or
	// directory does not exist, this call throws an exception with
	// `RESOURCE_DOES_NOT_EXIST`.
	//
	// When calling list on a large directory, the list operation will time out
	// after approximately 60 seconds. We strongly recommend using list only on
	// directories containing less than 10K files and discourage using the DBFS
	// REST API for operations that list more than 10K files. Instead, we
	// recommend that you perform such operations in the context of a cluster,
	// using the [File system utility
	// (dbutils.fs)](/dev-tools/databricks-utils.html#dbutils-fs), which
	// provides the same functionality without timing out.
	//
	// Use ListAll() to get all FileInfo instances
	List(ctx context.Context, request ListDbfsRequest) (*ListStatusResponse, error)

	// Create a directory.
	//
	// Creates the given directory and necessary parent directories if they do
	// not exist. If a file (not a directory) exists at any prefix of the input
	// path, this call throws an exception with `RESOURCE_ALREADY_EXISTS`.
	// **Note**: If this operation fails, it might have succeeded in creating
	// some of the necessary parent directories.
	Mkdirs(ctx context.Context, request MkDirs) error

	// Move a file.
	//
	// Moves a file from one location to another location within DBFS. If the
	// source file does not exist, this call throws an exception with
	// `RESOURCE_DOES_NOT_EXIST`. If a file already exists in the destination
	// path, this call throws an exception with `RESOURCE_ALREADY_EXISTS`. If
	// the given source path is a directory, this call always recursively moves
	// all files.
	Move(ctx context.Context, request Move) error

	// Upload a file.
	//
	// Uploads a file through the use of multipart form post. It is mainly used
	// for streaming uploads, but can also be used as a convenient single call
	// for data upload.
	//
	// Alternatively, you can pass the contents as a base64 string.
	//
	// The amount of data that can be passed (when not streaming) using the
	// __contents__ parameter is limited to 1 MB. `MAX_BLOCK_SIZE_EXCEEDED` will
	// be thrown if this limit is exceeded.
	//
	// If you want to upload large files, use the streaming upload. For details,
	// see :method:dbfs/create, :method:dbfs/addBlock, :method:dbfs/close.
	Put(ctx context.Context, request Put) error

	// Get the contents of a file.
	//
	// Returns the contents of a file. If the file does not exist, this call
	// throws an exception with `RESOURCE_DOES_NOT_EXIST`. If the path is a
	// directory, the read length is negative, or if the offset is negative,
	// this call throws an exception with `INVALID_PARAMETER_VALUE`. If the read
	// length exceeds 1 MB, this call throws an exception with
	// `MAX_READ_SIZE_EXCEEDED`.
	//
	// If `offset + length` exceeds the number of bytes in a file, it reads the
	// contents until the end of file.
	Read(ctx context.Context, request ReadDbfsRequest) (*ReadResponse, error)
}

The DBFS API makes it simple to interact with various data sources without having to include a user's credentials every time you read a file.

type Delete

type Delete struct {
	// The path of the file or directory to delete. The path should be the
	// absolute DBFS path.
	Path string `json:"path"`
	// Whether or not to recursively delete the directory's contents. Deleting
	// empty directories can be done without providing the recursive flag.
	Recursive bool `json:"recursive,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (Delete) MarshalJSON added in v0.23.0

func (s Delete) MarshalJSON() ([]byte, error)

func (*Delete) UnmarshalJSON added in v0.23.0

func (s *Delete) UnmarshalJSON(b []byte) error

type DeleteDirectoryRequest added in v0.31.0

type DeleteDirectoryRequest struct {
	// The absolute path of a directory.
	DirectoryPath string `json:"-" url:"-"`
}

Delete a directory

type DeleteDirectoryResponse added in v0.34.0

type DeleteDirectoryResponse struct {
}

type DeleteFileRequest added in v0.18.0

type DeleteFileRequest struct {
	// The absolute path of the file.
	FilePath string `json:"-" url:"-"`
}

Delete a file

type DeleteResponse added in v0.34.0

type DeleteResponse struct {
}

type DirectoryEntry added in v0.31.0

type DirectoryEntry struct {
	// The length of the file in bytes. This field is omitted for directories.
	FileSize int64 `json:"file_size,omitempty"`
	// True if the path is a directory.
	IsDirectory bool `json:"is_directory,omitempty"`
	// Last modification time of given file in milliseconds since unix epoch.
	LastModified int64 `json:"last_modified,omitempty"`
	// The name of the file or directory. This is the last component of the
	// path.
	Name string `json:"name,omitempty"`
	// The absolute path of the file or directory.
	Path string `json:"path,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (DirectoryEntry) MarshalJSON added in v0.31.0

func (s DirectoryEntry) MarshalJSON() ([]byte, error)

func (*DirectoryEntry) UnmarshalJSON added in v0.31.0

func (s *DirectoryEntry) UnmarshalJSON(b []byte) error

type DownloadRequest added in v0.18.0

type DownloadRequest struct {
	// The absolute path of the file.
	FilePath string `json:"-" url:"-"`
}

Download a file

type DownloadResponse added in v0.18.0

type DownloadResponse struct {
	ContentLength int64 `json:"-" url:"-" header:"content-length,omitempty"`

	ContentType string `json:"-" url:"-" header:"content-type,omitempty"`

	Contents io.ReadCloser `json:"-"`

	LastModified string `json:"-" url:"-" header:"last-modified,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (DownloadResponse) MarshalJSON added in v0.33.0

func (s DownloadResponse) MarshalJSON() ([]byte, error)

func (*DownloadResponse) UnmarshalJSON added in v0.33.0

func (s *DownloadResponse) UnmarshalJSON(b []byte) error

type FileInfo

type FileInfo struct {
	// The length of the file in bytes. This field is omitted for directories.
	FileSize int64 `json:"file_size,omitempty"`
	// True if the path is a directory.
	IsDir bool `json:"is_dir,omitempty"`
	// Last modification time of given file in milliseconds since epoch.
	ModificationTime int64 `json:"modification_time,omitempty"`
	// The absolute path of the file or directory.
	Path string `json:"path,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (FileInfo) MarshalJSON added in v0.23.0

func (s FileInfo) MarshalJSON() ([]byte, error)

func (*FileInfo) UnmarshalJSON added in v0.23.0

func (s *FileInfo) UnmarshalJSON(b []byte) error

type FileMode

type FileMode int

FileMode conveys user intent when opening a file.

const (
	// Exactly one of FileModeRead or FileModeWrite must be specified.
	FileModeRead FileMode = 1 << iota
	FileModeWrite
	FileModeOverwrite
)

type FilesAPI added in v0.10.0

type FilesAPI struct {
	// contains filtered or unexported fields
}

The Files API is a standard HTTP API that allows you to read, write, list, and delete files and directories by referring to their URI. The API makes working with file content as raw bytes easier and more efficient.

The API supports Unity Catalog volumes, where files and directories to operate on are specified using their volume URI path, which follows the format /Volumes/<catalog_name>/<schema_name>/<volume_name>/<path_to_file>.

The Files API has two distinct endpoints, one for working with files (`/fs/files`) and another one for working with directories (`/fs/directories`). Both endpoints use the standard HTTP methods GET, HEAD, PUT, and DELETE to manage files and directories specified using their URI path. The path is always absolute.
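
A minimal upload sketch against a Unity Catalog volume path (assuming the workspace client exposes the Files API as w.Files, with io and strings from the standard library; the volume path is illustrative):

err := w.Files.Upload(ctx, files.UploadRequest{
	FilePath:  "/Volumes/main/default/my_volume/hello.txt",
	Contents:  io.NopCloser(strings.NewReader("Hello world!")),
	Overwrite: true,
})
if err != nil {
	return err
}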

func NewFiles added in v0.10.0

func NewFiles(client *client.DatabricksClient) *FilesAPI

func (*FilesAPI) CreateDirectory added in v0.31.0

func (a *FilesAPI) CreateDirectory(ctx context.Context, request CreateDirectoryRequest) error

func (*FilesAPI) Delete added in v0.10.0

func (a *FilesAPI) Delete(ctx context.Context, request DeleteFileRequest) error

func (*FilesAPI) DeleteByFilePath added in v0.18.0

func (a *FilesAPI) DeleteByFilePath(ctx context.Context, filePath string) error

Delete a file.

Deletes a file. If the request is successful, there is no response body.

func (*FilesAPI) DeleteDirectory added in v0.31.0

func (a *FilesAPI) DeleteDirectory(ctx context.Context, request DeleteDirectoryRequest) error

func (*FilesAPI) DeleteDirectoryByDirectoryPath added in v0.31.0

func (a *FilesAPI) DeleteDirectoryByDirectoryPath(ctx context.Context, directoryPath string) error

Delete a directory.

Deletes an empty directory.

To delete a non-empty directory, first delete all of its contents. This can be done by listing the directory contents and deleting each file and subdirectory recursively.
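
One possible sketch of that recursive cleanup, built from ListDirectoryContentsAll, DeleteByFilePath, and DeleteDirectoryByDirectoryPath (an illustrative helper, not part of the SDK):

func deleteRecursively(ctx context.Context, w *databricks.WorkspaceClient, dir string) error {
	entries, err := w.Files.ListDirectoryContentsAll(ctx, files.ListDirectoryContentsRequest{DirectoryPath: dir})
	if err != nil {
		return err
	}
	for _, e := range entries {
		if e.IsDirectory {
			// Descend into subdirectories first.
			if err := deleteRecursively(ctx, w, e.Path); err != nil {
				return err
			}
		} else if err := w.Files.DeleteByFilePath(ctx, e.Path); err != nil {
			return err
		}
	}
	// The directory is now empty and can be deleted.
	return w.Files.DeleteDirectoryByDirectoryPath(ctx, dir)
}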

func (*FilesAPI) Download added in v0.10.0

func (a *FilesAPI) Download(ctx context.Context, request DownloadRequest) (*DownloadResponse, error)

func (*FilesAPI) DownloadByFilePath added in v0.18.0

func (a *FilesAPI) DownloadByFilePath(ctx context.Context, filePath string) (*DownloadResponse, error)

Download a file.

Downloads a file. The file contents are the response body. This is a standard HTTP file download, not a JSON RPC. It supports the Range and If-Unmodified-Since HTTP headers.
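
For example, streaming a download to a local file (a minimal sketch, assuming w.Files as above; paths are illustrative):

resp, err := w.Files.DownloadByFilePath(ctx, "/Volumes/main/default/my_volume/hello.txt")
if err != nil {
	return err
}
defer resp.Contents.Close()
local, err := os.Create("/path/to/local/hello.txt")
if err != nil {
	return err
}
defer local.Close()
if _, err := io.Copy(local, resp.Contents); err != nil {
	return err
}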

func (*FilesAPI) GetDirectoryMetadata added in v0.32.0

func (a *FilesAPI) GetDirectoryMetadata(ctx context.Context, request GetDirectoryMetadataRequest) error

func (*FilesAPI) GetDirectoryMetadataByDirectoryPath added in v0.32.0

func (a *FilesAPI) GetDirectoryMetadataByDirectoryPath(ctx context.Context, directoryPath string) error

Get directory metadata.

Get the metadata of a directory. The response HTTP headers contain the metadata. There is no response body.

This method is useful to check if a directory exists and the caller has access to it.

If you wish to ensure the directory exists, you can instead use `PUT`, which will create the directory if it does not exist, and is idempotent (it will succeed if the directory already exists).
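
For instance, a simple existence check (a minimal sketch that treats any error as "missing or inaccessible"; the path is illustrative):

if err := w.Files.GetDirectoryMetadataByDirectoryPath(ctx, "/Volumes/main/default/my_volume/dir"); err != nil {
	// The directory does not exist or the caller cannot access it.
	return err
}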

func (*FilesAPI) GetMetadata added in v0.32.0

func (a *FilesAPI) GetMetadata(ctx context.Context, request GetMetadataRequest) (*GetMetadataResponse, error)

func (*FilesAPI) GetMetadataByFilePath added in v0.32.0

func (a *FilesAPI) GetMetadataByFilePath(ctx context.Context, filePath string) (*GetMetadataResponse, error)

Get file metadata.

Get the metadata of a file. The response HTTP headers contain the metadata. There is no response body.

func (*FilesAPI) ListDirectoryContents added in v0.31.0

func (a *FilesAPI) ListDirectoryContents(ctx context.Context, request ListDirectoryContentsRequest) listing.Iterator[DirectoryEntry]

List directory contents.

Returns the contents of a directory. If there is no directory at the specified path, the API returns an HTTP 404 error.

This method is generated by Databricks SDK Code Generator.
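
A minimal sketch of paging through a directory with the returned iterator (assuming HasNext and Next as elsewhere in this SDK; PageSize is optional and shown only for illustration):

it := w.Files.ListDirectoryContents(ctx, files.ListDirectoryContentsRequest{
	DirectoryPath: "/Volumes/main/default/my_volume/dir",
	PageSize:      1000,
})
for it.HasNext(ctx) {
	entry, err := it.Next(ctx)
	if err != nil {
		return err
	}
	fmt.Println(entry.Path, entry.FileSize)
}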

func (*FilesAPI) ListDirectoryContentsAll added in v0.31.0

func (a *FilesAPI) ListDirectoryContentsAll(ctx context.Context, request ListDirectoryContentsRequest) ([]DirectoryEntry, error)

List directory contents.

Returns the contents of a directory. If there is no directory at the specified path, the API returns an HTTP 404 error.

This method is generated by Databricks SDK Code Generator.

func (*FilesAPI) ListDirectoryContentsByDirectoryPath added in v0.31.0

func (a *FilesAPI) ListDirectoryContentsByDirectoryPath(ctx context.Context, directoryPath string) (*ListDirectoryResponse, error)

List directory contents.

Returns the contents of a directory. If there is no directory at the specified path, the API returns an HTTP 404 error.

func (*FilesAPI) Upload added in v0.10.0

func (a *FilesAPI) Upload(ctx context.Context, request UploadRequest) error

type FilesInterface added in v0.29.0

type FilesInterface interface {

	// Create a directory.
	//
	// Creates an empty directory. If necessary, also creates any parent directories
	// of the new, empty directory (like the shell command `mkdir -p`). If called on
	// an existing directory, returns a success response; this method is idempotent
	// (it will succeed if the directory already exists).
	CreateDirectory(ctx context.Context, request CreateDirectoryRequest) error

	// Delete a file.
	//
	// Deletes a file. If the request is successful, there is no response body.
	Delete(ctx context.Context, request DeleteFileRequest) error

	// Delete a file.
	//
	// Deletes a file. If the request is successful, there is no response body.
	DeleteByFilePath(ctx context.Context, filePath string) error

	// Delete a directory.
	//
	// Deletes an empty directory.
	//
	// To delete a non-empty directory, first delete all of its contents. This can
	// be done by listing the directory contents and deleting each file and
	// subdirectory recursively.
	DeleteDirectory(ctx context.Context, request DeleteDirectoryRequest) error

	// Delete a directory.
	//
	// Deletes an empty directory.
	//
	// To delete a non-empty directory, first delete all of its contents. This can
	// be done by listing the directory contents and deleting each file and
	// subdirectory recursively.
	DeleteDirectoryByDirectoryPath(ctx context.Context, directoryPath string) error

	// Download a file.
	//
	// Downloads a file. The file contents are the response body. This is a standard
	// HTTP file download, not a JSON RPC. It supports the Range and
	// If-Unmodified-Since HTTP headers.
	Download(ctx context.Context, request DownloadRequest) (*DownloadResponse, error)

	// Download a file.
	//
	// Downloads a file. The file contents are the response body. This is a standard
	// HTTP file download, not a JSON RPC. It supports the Range and
	// If-Unmodified-Since HTTP headers.
	DownloadByFilePath(ctx context.Context, filePath string) (*DownloadResponse, error)

	// Get directory metadata.
	//
	// Get the metadata of a directory. The response HTTP headers contain the
	// metadata. There is no response body.
	//
	// This method is useful to check if a directory exists and the caller has
	// access to it.
	//
	// If you wish to ensure the directory exists, you can instead use `PUT`, which
	// will create the directory if it does not exist, and is idempotent (it will
	// succeed if the directory already exists).
	GetDirectoryMetadata(ctx context.Context, request GetDirectoryMetadataRequest) error

	// Get directory metadata.
	//
	// Get the metadata of a directory. The response HTTP headers contain the
	// metadata. There is no response body.
	//
	// This method is useful to check if a directory exists and the caller has
	// access to it.
	//
	// If you wish to ensure the directory exists, you can instead use `PUT`, which
	// will create the directory if it does not exist, and is idempotent (it will
	// succeed if the directory already exists).
	GetDirectoryMetadataByDirectoryPath(ctx context.Context, directoryPath string) error

	// Get file metadata.
	//
	// Get the metadata of a file. The response HTTP headers contain the metadata.
	// There is no response body.
	GetMetadata(ctx context.Context, request GetMetadataRequest) (*GetMetadataResponse, error)

	// Get file metadata.
	//
	// Get the metadata of a file. The response HTTP headers contain the metadata.
	// There is no response body.
	GetMetadataByFilePath(ctx context.Context, filePath string) (*GetMetadataResponse, error)

	// List directory contents.
	//
	// Returns the contents of a directory. If there is no directory at the
	// specified path, the API returns an HTTP 404 error.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListDirectoryContents(ctx context.Context, request ListDirectoryContentsRequest) listing.Iterator[DirectoryEntry]

	// List directory contents.
	//
	// Returns the contents of a directory. If there is no directory at the
	// specified path, the API returns an HTTP 404 error.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListDirectoryContentsAll(ctx context.Context, request ListDirectoryContentsRequest) ([]DirectoryEntry, error)

	// List directory contents.
	//
	// Returns the contents of a directory. If there is no directory at the
	// specified path, the API returns an HTTP 404 error.
	ListDirectoryContentsByDirectoryPath(ctx context.Context, directoryPath string) (*ListDirectoryResponse, error)

	// Upload a file.
	//
	// Uploads a file of up to 5 GiB. The file contents should be sent as the
	// request body as raw bytes (an octet stream); do not encode or otherwise
	// modify the bytes before sending. The contents of the resulting file will be
	// exactly the bytes sent in the request body. If the request is successful,
	// there is no response body.
	Upload(ctx context.Context, request UploadRequest) error
}

type FilesService added in v0.10.0

type FilesService interface {

	// Create a directory.
	//
	// Creates an empty directory. If necessary, also creates any parent
	// directories of the new, empty directory (like the shell command `mkdir
	// -p`). If called on an existing directory, returns a success response;
	// this method is idempotent (it will succeed if the directory already
	// exists).
	CreateDirectory(ctx context.Context, request CreateDirectoryRequest) error

	// Delete a file.
	//
	// Deletes a file. If the request is successful, there is no response body.
	Delete(ctx context.Context, request DeleteFileRequest) error

	// Delete a directory.
	//
	// Deletes an empty directory.
	//
	// To delete a non-empty directory, first delete all of its contents. This
	// can be done by listing the directory contents and deleting each file and
	// subdirectory recursively.
	DeleteDirectory(ctx context.Context, request DeleteDirectoryRequest) error

	// Download a file.
	//
	// Downloads a file. The file contents are the response body. This is a
	// standard HTTP file download, not a JSON RPC. It supports the Range and
	// If-Unmodified-Since HTTP headers.
	Download(ctx context.Context, request DownloadRequest) (*DownloadResponse, error)

	// Get directory metadata.
	//
	// Get the metadata of a directory. The response HTTP headers contain the
	// metadata. There is no response body.
	//
	// This method is useful to check if a directory exists and the caller has
	// access to it.
	//
	// If you wish to ensure the directory exists, you can instead use `PUT`,
	// which will create the directory if it does not exist, and is idempotent
	// (it will succeed if the directory already exists).
	GetDirectoryMetadata(ctx context.Context, request GetDirectoryMetadataRequest) error

	// Get file metadata.
	//
	// Get the metadata of a file. The response HTTP headers contain the
	// metadata. There is no response body.
	GetMetadata(ctx context.Context, request GetMetadataRequest) (*GetMetadataResponse, error)

	// List directory contents.
	//
	// Returns the contents of a directory. If there is no directory at the
	// specified path, the API returns an HTTP 404 error.
	//
	// Use ListDirectoryContentsAll() to get all DirectoryEntry instances, which will iterate over every result page.
	ListDirectoryContents(ctx context.Context, request ListDirectoryContentsRequest) (*ListDirectoryResponse, error)

	// Upload a file.
	//
	// Uploads a file of up to 5 GiB. The file contents should be sent as the
	// request body as raw bytes (an octet stream); do not encode or otherwise
	// modify the bytes before sending. The contents of the resulting file will
	// be exactly the bytes sent in the request body. If the request is
	// successful, there is no response body.
	Upload(ctx context.Context, request UploadRequest) error
}

The Files API is a standard HTTP API that allows you to read, write, list, and delete files and directories by referring to their URI. The API makes working with file content as raw bytes easier and more efficient.

The API supports Unity Catalog volumes, where files and directories to operate on are specified using their volume URI path, which follows the format /Volumes/<catalog_name>/<schema_name>/<volume_name>/<path_to_file>.

The Files API has two distinct endpoints, one for working with files (`/fs/files`) and another one for working with directories (`/fs/directories`). Both endpoints use the standard HTTP methods GET, HEAD, PUT, and DELETE to manage files and directories specified using their URI path. The path is always absolute.

type GetDirectoryMetadataRequest added in v0.32.0

type GetDirectoryMetadataRequest struct {
	// The absolute path of a directory.
	DirectoryPath string `json:"-" url:"-"`
}

Get directory metadata

type GetDirectoryMetadataResponse added in v0.34.0

type GetDirectoryMetadataResponse struct {
}

type GetMetadataRequest added in v0.32.0

type GetMetadataRequest struct {
	// The absolute path of the file.
	FilePath string `json:"-" url:"-"`
}

Get file metadata

type GetMetadataResponse added in v0.32.0

type GetMetadataResponse struct {
	ContentLength int64 `json:"-" url:"-" header:"content-length,omitempty"`

	ContentType string `json:"-" url:"-" header:"content-type,omitempty"`

	LastModified string `json:"-" url:"-" header:"last-modified,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (GetMetadataResponse) MarshalJSON added in v0.32.0

func (s GetMetadataResponse) MarshalJSON() ([]byte, error)

func (*GetMetadataResponse) UnmarshalJSON added in v0.32.0

func (s *GetMetadataResponse) UnmarshalJSON(b []byte) error

type GetStatusRequest

type GetStatusRequest struct {
	// The path of the file or directory. The path should be the absolute DBFS
	// path.
	Path string `json:"-" url:"path"`
}

Get the information of a file or directory

type Handle

type Handle interface {
	io.ReadWriteCloser
	io.WriterTo
	io.ReaderFrom
}

Handle defines the interface of the object returned by DbfsAPI.Open.

type ListDbfsRequest

type ListDbfsRequest struct {
	// The path of the file or directory. The path should be the absolute DBFS
	// path.
	Path string `json:"-" url:"path"`
}

List directory contents or file details

type ListDirectoryContentsRequest added in v0.31.0

type ListDirectoryContentsRequest struct {
	// The absolute path of a directory.
	DirectoryPath string `json:"-" url:"-"`
	// The maximum number of directory entries to return. The response may
	// contain fewer entries. If the response contains a `next_page_token`,
	// there may be more entries, even if fewer than `page_size` entries are in
	// the response.
	//
	// We recommend not setting this value unless you are intentionally listing
	// less than the complete directory contents.
	//
	// If unspecified, at most 1000 directory entries will be returned. The
	// maximum value is 1000. Values above 1000 will be coerced to 1000.
	PageSize int64 `json:"-" url:"page_size,omitempty"`
	// An opaque page token which was the `next_page_token` in the response of
	// the previous request to list the contents of this directory. Provide this
	// token to retrieve the next page of directory entries. When providing a
	// `page_token`, all other parameters provided to the request must match the
	// previous request. To list all of the entries in a directory, it is
	// necessary to continue requesting pages of entries until the response
	// contains no `next_page_token`. Note that the number of entries returned
	// must not be used to determine when the listing is complete.
	PageToken string `json:"-" url:"page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

List directory contents

func (ListDirectoryContentsRequest) MarshalJSON added in v0.31.0

func (s ListDirectoryContentsRequest) MarshalJSON() ([]byte, error)

func (*ListDirectoryContentsRequest) UnmarshalJSON added in v0.31.0

func (s *ListDirectoryContentsRequest) UnmarshalJSON(b []byte) error

type ListDirectoryResponse added in v0.31.0

type ListDirectoryResponse struct {
	// Array of DirectoryEntry.
	Contents []DirectoryEntry `json:"contents,omitempty"`
	// A token, which can be sent as `page_token` to retrieve the next page.
	NextPageToken string `json:"next_page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListDirectoryResponse) MarshalJSON added in v0.31.0

func (s ListDirectoryResponse) MarshalJSON() ([]byte, error)

func (*ListDirectoryResponse) UnmarshalJSON added in v0.31.0

func (s *ListDirectoryResponse) UnmarshalJSON(b []byte) error

type ListStatusResponse

type ListStatusResponse struct {
	// A list of FileInfo entries that describe the contents of the directory
	// or file.
	Files []FileInfo `json:"files,omitempty"`
}

type MkDirs

type MkDirs struct {
	// The path of the new directory. The path should be the absolute DBFS path.
	Path string `json:"path"`
}

type MkDirsResponse added in v0.34.0

type MkDirsResponse struct {
}

type Move

type Move struct {
	// The destination path of the file or directory. The path should be the
	// absolute DBFS path.
	DestinationPath string `json:"destination_path"`
	// The source path of the file or directory. The path should be the absolute
	// DBFS path.
	SourcePath string `json:"source_path"`
}

type MoveResponse added in v0.34.0

type MoveResponse struct {
}

type Put

type Put struct {
	// This parameter might be absent, and instead a posted file will be used.
	Contents string `json:"contents,omitempty"`
	// The flag that specifies whether to overwrite existing file/files.
	Overwrite bool `json:"overwrite,omitempty"`
	// The path of the new file. The path should be the absolute DBFS path.
	Path string `json:"path"`

	ForceSendFields []string `json:"-"`
}

func (Put) MarshalJSON added in v0.23.0

func (s Put) MarshalJSON() ([]byte, error)

func (*Put) UnmarshalJSON added in v0.23.0

func (s *Put) UnmarshalJSON(b []byte) error

type PutResponse added in v0.34.0

type PutResponse struct {
}

type ReadDbfsRequest

type ReadDbfsRequest struct {
	// The number of bytes to read starting from the offset. This has a limit of
	// 1 MB, and a default value of 0.5 MB.
	Length int64 `json:"-" url:"length,omitempty"`
	// The offset to read from in bytes.
	Offset int64 `json:"-" url:"offset,omitempty"`
	// The path of the file to read. The path should be the absolute DBFS path.
	Path string `json:"-" url:"path"`

	ForceSendFields []string `json:"-"`
}

Get the contents of a file

func (ReadDbfsRequest) MarshalJSON added in v0.23.0

func (s ReadDbfsRequest) MarshalJSON() ([]byte, error)

func (*ReadDbfsRequest) UnmarshalJSON added in v0.23.0

func (s *ReadDbfsRequest) UnmarshalJSON(b []byte) error

type ReadResponse

type ReadResponse struct {
	// The number of bytes read (could be less than `length` if we hit end of
	// file). This refers to the number of bytes read in the unencoded version
	// (the response data is base64-encoded).
	BytesRead int64 `json:"bytes_read,omitempty"`
	// The base64-encoded contents of the file read.
	Data string `json:"data,omitempty"`

	ForceSendFields []string `json:"-"`
}
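
For example, reading a chunk through the low-level endpoint and decoding it (a minimal sketch using encoding/base64; path, offset, and length are illustrative):

resp, err := w.Dbfs.Read(ctx, files.ReadDbfsRequest{
	Path:   "/path/to/remote/file",
	Offset: 0,
	Length: 1024 * 1024, // at most 1 MB per call
})
if err != nil {
	return err
}
chunk, err := base64.StdEncoding.DecodeString(resp.Data)
if err != nil {
	return err
}
fmt.Printf("read %d bytes\n", len(chunk))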

func (ReadResponse) MarshalJSON added in v0.23.0

func (s ReadResponse) MarshalJSON() ([]byte, error)

func (*ReadResponse) UnmarshalJSON added in v0.23.0

func (s *ReadResponse) UnmarshalJSON(b []byte) error

type UploadRequest added in v0.18.0

type UploadRequest struct {
	Contents io.ReadCloser `json:"-"`
	// The absolute path of the file.
	FilePath string `json:"-" url:"-"`
	// If true, an existing file will be overwritten.
	Overwrite bool `json:"-" url:"overwrite,omitempty"`

	ForceSendFields []string `json:"-"`
}

Upload a file

func (UploadRequest) MarshalJSON added in v0.23.0

func (s UploadRequest) MarshalJSON() ([]byte, error)

func (*UploadRequest) UnmarshalJSON added in v0.23.0

func (s *UploadRequest) UnmarshalJSON(b []byte) error

type UploadResponse added in v0.34.0

type UploadResponse struct {
}
