
vx

Explain videos in plain text with LLMs.

Install CLI

> go install github.com/hum/vx/cmd/vx@latest
> vx --help

Install package

> go get github.com/hum/vx

Usage

As a CLI

The CLI lets you specify an alternative API endpoint (only tested with Perplexity), as well as a custom prompt or model. For all options, run:

> vx --help

Vx supports both streamed and non-streamed responses on the CLI. Use the --stream flag to stream the output to STDOUT.

> vx --url "url" --stream

As a package

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/hum/vx"
	"github.com/sashabaranov/go-openai"
)

func main() {
	// Placeholder token; use your actual API key.
	oapi := openai.NewClient("token")

	// Build one chat completion request per transcript chunk.
	r, err := vx.GetVideoExplanationRequests(oapi, vx.VideoExplanationOpts{
		Url:       "url",
		Prompt:    "Give me 5 bullet points from this text: ",
		ChunkSize: 5 * time.Minute,
	})
	if err != nil {
		panic(err)
	}

	// Send each request and print the model's answer.
	for _, request := range r {
		response, err := oapi.CreateChatCompletion(context.TODO(), request)
		if err != nil {
			panic(err)
		}
		fmt.Println(response.Choices[0].Message.Content)
	}
}
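
The returned requests can also be streamed rather than collected in one response per chunk. Below is a minimal sketch, assuming Stream: true is set in the opts and using go-openai's streaming client; the token and URL are placeholders, and whether the package adjusts the requests further when Stream is set is not shown here.

package main

import (
	"context"
	"errors"
	"fmt"
	"io"
	"time"

	"github.com/hum/vx"
	"github.com/sashabaranov/go-openai"
)

func main() {
	oapi := openai.NewClient("token")

	r, err := vx.GetVideoExplanationRequests(oapi, vx.VideoExplanationOpts{
		Url:       "url",
		Prompt:    "Give me 5 bullet points from this text: ",
		ChunkSize: 5 * time.Minute,
		Stream:    true,
	})
	if err != nil {
		panic(err)
	}

	for _, request := range r {
		// Stream each chunk's answer token by token to STDOUT.
		stream, err := oapi.CreateChatCompletionStream(context.TODO(), request)
		if err != nil {
			panic(err)
		}
		for {
			resp, err := stream.Recv()
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				panic(err)
			}
			if len(resp.Choices) > 0 {
				fmt.Print(resp.Choices[0].Delta.Content)
			}
		}
		stream.Close()
		fmt.Println()
	}
}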

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func GetVideoExplanationRequests

func GetVideoExplanationRequests(c *openai.Client, opts VideoExplanationOpts) ([]openai.ChatCompletionRequest, error)

GetVideoExplanationRequests retrieves the transcription of a given URL, then splits the transcript into smaller segments according to the provided opts.ChunkSize.

It returns a slice of openai.ChatCompletionRequest objects that can be used to interactively stream responses from the API.

func NewChatCompletionRequest

func NewChatCompletionRequest(chunk TextBucket, opts VideoExplanationOpts) *openai.ChatCompletionRequest

Given a single TextBucket, this function constructs an individual request object for the API. If no model is explicitly set via opts.Model, it defaults to 'mistral-7b-instruct'.
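
A minimal sketch of building and sending a single request from one bucket, reusing the imports and the oapi client from the README example above; the bucket contents and times are placeholder assumptions, and Model is left unset so the documented default applies.

chunk := vx.TextBucket{
	// Relative times only, as documented on TextBucket.
	Start: time.Time{},
	End:   time.Time{}.Add(5 * time.Minute),
	Text:  "transcript text for this interval",
}
// Model is unset, so the default ('mistral-7b-instruct') is used.
request := vx.NewChatCompletionRequest(chunk, vx.VideoExplanationOpts{
	Prompt: "Give me 5 bullet points from this text: ",
})
response, err := oapi.CreateChatCompletion(context.TODO(), *request)
if err != nil {
	panic(err)
}
fmt.Println(response.Choices[0].Message.Content)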

Types

type TextBucket

type TextBucket struct {
	// Start of the text. This should be a relative time only, e.g. Start=00:00:00.
	Start time.Time
	// End of the text. This should be a relative time only, e.g. End=00:00:05.
	End time.Time
	// The text of the bucket, which occurs between the start and end times.
	Text string
}

TextBucket is an atomic unit: a piece of text being explained.

type VideoExplanationOpts

type VideoExplanationOpts struct {
	// The URL of the video content
	Url string
	// Name of the model to use as the LLM
	Model *string
	// Whether to stream the request
	Stream bool
	// The default prompt to be prepended to each TextBucket's text
	Prompt string
	// Time interval of the chunked text. If the text is an hour long and the duration is set to 30*time.Minute, there will be 2 buckets internally.
	ChunkSize time.Duration
}

VideoExplanationOpts encapsulates all options to use for video explanation.
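
Since Model is a *string, overriding the default model takes a pointer. A minimal sketch, reusing the oapi client from the README example above; the model name here simply repeats the documented default for illustration.

model := "mistral-7b-instruct"
opts := vx.VideoExplanationOpts{
	Url:       "url",
	Prompt:    "Give me 5 bullet points from this text: ",
	ChunkSize: 5 * time.Minute,
	Model:     &model, // overrides the default model name
}
requests, err := vx.GetVideoExplanationRequests(oapi, opts)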

Directories

Path	Synopsis
cmd/vx
