bwmf

package
v0.0.0-...-28de2d1
Published: May 26, 2015 License: Apache-2.0 Imports: 16 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func Dump

func Dump(conf *Config) ([]byte, error)

func GetFsClient

func GetFsClient(config *Config) (fs.Client, error)

func LoadMatrixShard

func LoadMatrixShard(client fs.Client, path string) (*pb.MatrixShard, error)

func SaveMatrixShard

func SaveMatrixShard(client fs.Client, shard *pb.MatrixShard, path string) error
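
Taken together, these helpers cover shard I/O. Below is a minimal sketch of how they compose: given an already-parsed *Config (see Parse below), build a filesystem client and round-trip one shard. The import path, the example paths, and the helper name copyShard are assumptions for illustration, not part of this package.

	package bwmfexample

	import (
		"fmt"

		"github.com/taskgraph/taskgraph/example/bwmf" // import path assumed
	)

	// copyShard reads the matrix shard stored at src and writes it back to dst,
	// using the filesystem client described by conf.
	func copyShard(conf *bwmf.Config, src, dst string) error {
		client, err := bwmf.GetFsClient(conf)
		if err != nil {
			return fmt.Errorf("get fs client: %v", err)
		}
		shard, err := bwmf.LoadMatrixShard(client, src)
		if err != nil {
			return fmt.Errorf("load %s: %v", src, err)
		}
		return bwmf.SaveMatrixShard(client, shard, dst)
	}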

Types

type BWMFTaskBuilder

type BWMFTaskBuilder struct {
	NumOfTasks uint64
	ConfBytes  []byte
}

func (BWMFTaskBuilder) GetTask

func (tb BWMFTaskBuilder) GetTask(taskID uint64) taskgraph.Task
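
A sketch of driving the builder by hand: fill it with the task count and the serialized configuration, then ask for a task by ID (in a real job the taskgraph framework makes this call per task). The import path, the file name, and the task count are assumptions.

	package main

	import (
		"log"
		"os"

		"github.com/taskgraph/taskgraph/example/bwmf" // import path assumed
	)

	func main() {
		// ConfBytes holds a serialized Config, e.g. the output of Dump.
		confBytes, err := os.ReadFile("bwmf.conf") // file name is illustrative
		if err != nil {
			log.Fatal(err)
		}

		builder := bwmf.BWMFTaskBuilder{
			NumOfTasks: 4, // illustrative task count
			ConfBytes:  confBytes,
		}

		// Materialize the task with ID 0; the framework normally owns this call.
		task := builder.GetTask(0)
		_ = task
	}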

type Config

type Config struct {
	OptConf optconfig
	IOConf  ioconfig
}

func Parse

func Parse(buf []byte) (*Config, error)
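
A small round-trip sketch: Parse turns serialized bytes into a *Config and Dump (above) serializes it back, e.g. to feed BWMFTaskBuilder.ConfBytes. The import path and the file name are assumptions; the serialization format is whatever Parse expects and is not documented here.

	package main

	import (
		"log"
		"os"

		"github.com/taskgraph/taskgraph/example/bwmf" // import path assumed
	)

	func main() {
		raw, err := os.ReadFile("bwmf.conf") // file name is illustrative
		if err != nil {
			log.Fatal(err)
		}

		conf, err := bwmf.Parse(raw)
		if err != nil {
			log.Fatalf("parse config: %v", err)
		}

		out, err := bwmf.Dump(conf)
		if err != nil {
			log.Fatalf("dump config: %v", err)
		}
		log.Printf("config round-tripped through Parse/Dump, %d bytes", len(out))
	}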

type KLDivLoss

type KLDivLoss struct {
	V Matrix
	W Matrix
	// contains filtered or unexported fields
}

`KLDivLoss` is a `Function` that evaluates Kullback-Leibler Divergence and the corresponding gradient at the given `Parameter`.

XXX(baigang): matrix layout
  W is vectorized by the mapping W[ I, J ] = W_para[ I * k + J ]
  H is vectorized by the mapping H[ I, J ] = H_para[ I * k + J ]
So H here is actually H^T, but this saves code by reusing the same routine when alternately optimizing over H and W.

func NewKLDivLoss

func NewKLDivLoss(v *pb.MatrixShard, w []*pb.MatrixShard, m, n, k uint32, smooth float32) *KLDivLoss

func (*KLDivLoss) Evaluate

func (l *KLDivLoss) Evaluate(H op.Parameter, gradient op.Parameter) float32

This function evaluates the Kullback-Leibler divergence given $\mathbf{V}$, the matrix to factorize, and $\mathbf{W}$, the fixed factor.

The generalized KL divergence is:

  $$ D_{KL} = \sum_{ij} \left( V_{ij} \log \frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij} \right) $$

After removing the terms that are constant with respect to $H$ and adding the smoothing term, it becomes:

  $$ L_{kl} = \sum_{ij} \left( -V_{ij} \log\left( (WH)_{ij} + \mathrm{smooth} \right) + (WH)_{ij} \right) $$

The gradient is:

  $$ \frac{\partial D_{KL}}{\partial H} = -W^T Z + W^T \bar{Z} $$

where $Z_{ij} = \frac{V_{ij}}{(WH)_{ij}}$ and $\bar{Z}_{ij} = 1$.
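
Expanding entrywise makes this form explicit (this derivation is not in the package docs; it is just the chain rule applied to $D_{KL}$, using $\partial (WH)_{ij} / \partial H_{kj} = W_{ik}$):

  $$ \frac{\partial D_{KL}}{\partial H_{kj}}
     = \sum_i W_{ik} \left( 1 - \frac{V_{ij}}{(WH)_{ij}} \right)
     = \left( -W^T Z + W^T \bar{Z} \right)_{kj} $$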

This implementation makes two passes over the full matrix, each of which runs in parallel. One pass evaluates W*H and accumulates the KL-divergence value; the other evaluates the gradient of the KL divergence.
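
A hedged construction sketch for the loss: wire already-loaded shards (see LoadMatrixShard above) into NewKLDivLoss. The import paths, dimensions, and smoothing value are assumptions; evaluating the loss additionally needs op.Parameter values for H and the gradient buffer, whose construction is not covered by this documentation.

	package bwmfexample

	import (
		"github.com/taskgraph/taskgraph/example/bwmf"          // import path assumed
		pb "github.com/taskgraph/taskgraph/example/bwmf/proto" // import path assumed
	)

	// newLoss builds the KL-divergence loss for one block of V against the
	// current shards of W. The dimensions and smoothing constant below are
	// illustrative; real values come from the job configuration.
	func newLoss(v *pb.MatrixShard, w []*pb.MatrixShard) *bwmf.KLDivLoss {
		const (
			m      = 1000 // block height (assumed meaning)
			n      = 500  // block width (assumed meaning)
			k      = 20   // latent dimension
			smooth = 1e-6 // smoothing added inside the log, as in the loss above
		)
		return bwmf.NewKLDivLoss(v, w, m, n, k, smooth)
	}

	// Evaluating then looks like
	//   val := loss.Evaluate(h, grad)
	// where h holds the current H and grad receives the gradient (as suggested
	// by the signature and the doc comment above); val is the loss value.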

type Matrix

type Matrix struct {
	// contains filtered or unexported fields
}

func (*Matrix) Get

func (self *Matrix) Get(r, c uint32) float32

Using the following functions to access matrices is NOT recommended, as it will harm performance given Go's limited inlining support.

func (*Matrix) M

func (self *Matrix) M() uint32

func (*Matrix) N

func (self *Matrix) N() uint32

func (*Matrix) Set

func (self *Matrix) Set(r, c uint32, v float32)
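
This documentation exposes Matrix values only through the exported V and W fields of KLDivLoss, so a minimal sketch of the accessors (mindful of the performance note above) reads the shape and one entry through such a value. The function name and the field-based access pattern are assumptions.

	package bwmfexample

	import (
		"fmt"

		"github.com/taskgraph/taskgraph/example/bwmf" // import path assumed
	)

	// describeV prints the shape of the V block held by the loss and its (0, 0) entry.
	func describeV(l *bwmf.KLDivLoss) {
		rows, cols := l.V.M(), l.V.N()
		fmt.Printf("V block: %d x %d, V[0,0] = %f\n", rows, cols, l.V.Get(0, 0))
	}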

Directories

Path Synopsis
proto	Package proto is a generated protocol buffer package.
