Documentation ¶
Index ¶
- func Dump(conf *Config) ([]byte, error)
- func GetFsClient(config *Config) (fs.Client, error)
- func LoadMatrixShard(client fs.Client, path string) (*pb.MatrixShard, error)
- func SaveMatrixShard(client fs.Client, shard *pb.MatrixShard, path string) error
- type BWMFTaskBuilder
- type Config
- type KLDivLoss
- type Matrix
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func LoadMatrixShard ¶
func LoadMatrixShard(client fs.Client, path string) (*pb.MatrixShard, error)
func SaveMatrixShard ¶
func SaveMatrixShard(client fs.Client, shard *pb.MatrixShard, path string) error
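A minimal round-trip sketch using the two functions above, assuming this package is imported as bwmf and that conf is an initialized *Config; the shard paths are placeholders:

    // Hypothetical usage: conf and both paths are placeholders.
    client, err := bwmf.GetFsClient(conf)
    if err != nil {
        log.Fatalf("creating fs client: %v", err)
    }
    shard, err := bwmf.LoadMatrixShard(client, "/data/shards/w-0000")
    if err != nil {
        log.Fatalf("loading shard: %v", err)
    }
    // ... read or update the shard ...
    if err := bwmf.SaveMatrixShard(client, shard, "/data/shards/w-0000.out"); err != nil {
        log.Fatalf("saving shard: %v", err)
    }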
Types ¶
type BWMFTaskBuilder ¶
type KLDivLoss ¶
`KLDivLoss` is a `Function` that evaluates the Kullback-Leibler divergence and the corresponding gradient at the given `Parameter`.
XXX(baigang): Matrix layout: W is vectorized by the mapping W[I, J] = W_para[I*k + J], and H is vectorized by the mapping H[I, J] = H_para[I*k + J]. So H is actually stored as H^T, but this saves code by reusing the same routine when alternately optimizing over H and W.
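For illustration only, a tiny helper (not part of this package) that reads one entry from the vectorized layout described above:

    // at returns M[I, J] from a matrix vectorized row-major with
    // inner dimension k, i.e. M[I, J] = para[I*k + J].
    func at(para []float32, k, i, j uint32) float32 {
        return para[i*k+j]
    }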
func NewKLDivLoss ¶
func NewKLDivLoss(v *pb.MatrixShard, w []*pb.MatrixShard, m, n, k uint32, smooth float32) *KLDivLoss
func (*KLDivLoss) Evaluate ¶
This function evaluates the Kullback-Leibler divergence given $\mathbf{V}$, the matrix to factorize, and $\mathbf{W}$, the fixed factor.
The generalized KL divergence is:

$$ D_{KL} = \sum_{ij} \left( V_{ij} \log \frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij} \right) $$

After removing the redundant constant term and adding the smoothing factor, it becomes:

$$ L_{kl} = \sum_{ij} \left( -V_{ij} \log((WH)_{ij} + smooth) + (WH)_{ij} \right) $$

The gradient is:

$$ \nabla_H D_{KL} = -W^T Z + W^T \bar{Z} $$

where $Z_{ij} = \frac{V_{ij}}{(WH)_{ij}}$ and $\bar{Z}_{ij} = 1$.

This implementation makes two passes over the full matrix, each of which runs in parallel: one pass evaluates $WH$ and accumulates the KL-divergence value, and the other evaluates the gradient of the KL divergence with respect to $H$.
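A serial, dense sketch of the two passes (hypothetical names and standard row-major layouts, unlike the package's sharded, transposed-H representation; the smooth term is folded into Z's denominator to match the smoothed loss):

    import "math"

    // klValueAndGrad follows the formulas above for dense V (m x n),
    // W (m x k), and H (k x n): pass one evaluates W*H and accumulates
    // the smoothed KL value; pass two accumulates the gradient w.r.t. H.
    func klValueAndGrad(v, w, h []float32, m, n, k int, smooth float32) (loss float32, grad []float32) {
        wh := make([]float32, m*n)
        // Pass 1: evaluate W*H and accumulate the KL-divergence value.
        for i := 0; i < m; i++ {
            for j := 0; j < n; j++ {
                var s float32
                for t := 0; t < k; t++ {
                    s += w[i*k+t] * h[t*n+j]
                }
                wh[i*n+j] = s
                loss += -v[i*n+j]*float32(math.Log(float64(s+smooth))) + s
            }
        }
        // Pass 2: grad = -W^T*Z + W^T*1 = W^T*(1 - Z),
        // with Z_ij = V_ij / ((WH)_ij + smooth).
        grad = make([]float32, k*n)
        for i := 0; i < m; i++ {
            for j := 0; j < n; j++ {
                z := v[i*n+j] / (wh[i*n+j] + smooth)
                for t := 0; t < k; t++ {
                    grad[t*n+j] += w[i*k+t] * (1 - z)
                }
            }
        }
        return loss, grad
    }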
Source Files ¶