Documentation ¶
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func GetVideoExplanationRequests ¶
func GetVideoExplanationRequests(c *openai.Client, opts VideoExplanationOpts) ([]openai.ChatCompletionRequest, error)
GetVideoExplanationRequests retrieves the transcription of the video at opts.Url, then splits the transcript into smaller segments according to the provided opts.ChunkSize.
It returns a slice of openai.ChatCompletionRequest objects that can be used to interactively stream responses from the API.
func NewChatCompletionRequest ¶
func NewChatCompletionRequest(chunk TextBucket, opts VideoExplanationOpts) *openai.ChatCompletionRequest
Given a single TextBucket, this function constructs an individual request object for the API. If no model is explicitly set via opts.Model, it defaults to 'mistral-7b-instruct'.
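The model-defaulting rule can be sketched with the usual optional-pointer pattern. resolveModel is a hypothetical helper, not the package's actual code; only the default model name is taken from the description above.

```go
package main

import "fmt"

// defaultModel is the fallback named in the documentation above.
const defaultModel = "mistral-7b-instruct"

// resolveModel is a hypothetical sketch of the defaulting rule: use the
// model pointed to by opts.Model when it is set, otherwise the default.
func resolveModel(model *string) string {
	if model == nil {
		return defaultModel
	}
	return *model
}

func main() {
	fmt.Println(resolveModel(nil)) // mistral-7b-instruct

	m := "some-other-model" // hypothetical override
	fmt.Println(resolveModel(&m))
}
```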
Types ¶
type TextBucket ¶
type TextBucket struct {
	// Start of the text. This should be a relative time only, e.g. Start=00:00:00.
	Start time.Time
	// End of the text. This should be a relative time only, e.g. End=00:00:05.
	End time.Time
	// The text of the bucket, which occurs between the start time and end time.
	Text string
}
TextBucket is an atomic unit: a piece of text being explained.
type VideoExplanationOpts ¶
type VideoExplanationOpts struct {
	// The URL of the video content.
	Url string
	// Name of the model used as an LLM.
	Model *string
	// Whether to stream the request.
	Stream bool
	// The default prompt to be prepended before each TextBucket.
	Prompt string
	// Time interval of the chunked text. If the text is an hour long and
	// ChunkSize is set to 30*time.Minute, there will be 2 buckets internally.
	ChunkSize time.Duration
}
VideoExplanationOpts encapsulates all options to use for video explanation.