Documentation

Overview

Package fastinvsqrt is an implementation of the fast inverse square root algorithm released with the Quake 3 source code. It estimates 1 / sqrt(x) to within 1% and runs about 3 times as fast as the standard calculation on some machines.
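
For orientation, a minimal sketch of the classic technique, assuming this package follows the original Quake 3 approach (FastInvSqrt here is illustrative, not part of this package's API; math is the standard library package):

    func FastInvSqrt(x float32) float32 {
        i := math.Float32bits(x)     // reinterpret the float's bits as an integer
        i = 0x5f3759df - (i >> 1)    // the famous magic-constant first estimate
        y := math.Float32frombits(i) // reinterpret the bits back to a float
        return y * (1.5 - 0.5*x*y*y) // one Newton-Raphson step to refine
    }

The single Newton-Raphson iteration is what tightens the raw bit-level estimate to within the quoted 1%.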

Note: for background on byte order, start here: https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html

IEEE 754

The algorithm relies on the IEEE 754 standard floating point representation and some clever bit-level optimizations; it handles only normalized numbers.

0 00000000 00000000000000000000000
|     |             |--> 23 bit mantissa
|     |--> 8 bit exponent
|--> 1 sign bit (0 means positive)

bit representation is

2^23 * E + M (shift E by 23 bits and add M)

decimal number is

(1 + M/2^23) * 2^(E-127)
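
As a worked check of both formulas (math.Float32bits is the standard library function; the variable names are illustrative):

    bits := math.Float32bits(0.15625) // 0x3E200000
    M := bits & (1<<23 - 1)           // mantissa M = 0x200000, so M/2^23 = 0.25
    E := (bits >> 23) & 0xFF          // exponent E = 124, so E-127 = -3
    // (1 + 0.25) * 2^-3 = 1.25 * 0.125 = 0.15625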

Mantissa

The mantissa is 23 bits. In binary the leading digit of a normalized number is unique: the only non-zero digit is 1, so the digit before the binary point is always 1. Because it is always 1, it is implied and does not need to be stored.

Instead of storing the leading 1 plus 22 binary places

1.0000000000000000000000

we get the full 23 binary places

.00000000000000000000000

The stored mantissa M ranges from 0 to 2^23 - 1.
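
A sketch of restoring the implied leading 1 (bits is an illustrative uint32 holding a float32 pattern):

    M := bits & (1<<23 - 1) // the 23 stored mantissa bits
    sig := M | 1<<23        // restore the implied leading 1: a 24 bit significand
    // value = sig * 2^(E - 127 - 23)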

Exponent

The stored exponent E is biased by 127 (the decoded exponent is E - 127), which allows negative exponents.

Instead of

0 .. 255

we get a range of

-127 to 128 (in full IEEE 754, E = 0 and E = 255 are reserved for zero/subnormals and infinity/NaN, so normalized numbers use -126 to 127)
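
In code this reads roughly as (illustrative names):

    E := (bits >> 23) & 0xFF // stored exponent, 0 .. 255
    exp := int(E) - 127      // unbiased exponent, -127 .. 128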

Sign Bit

The sign bit is ignored by this algorithm; real square roots exist only for non-negative numbers.
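
Extracting it is still a single shift when needed (illustrative):

    sign := bits >> 31 // 0 for a positive number, 1 for a negative one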

References

Based on the fast inverse square root code from the Quake III Arena source.

Reference: https://www.youtube.com/watch?v=p8u_k2LIZyo

Index

Constants

const (
	// arbitrary 1 ppt tolerance // TODO - move to config
	DefaultTolerance = OnePPT
)
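
A sketch of how a tolerance might be applied when checking an estimate against the exact value (est and x are illustrative, not package API):

    exact := 1 / math.Sqrt(float64(x))                            // x is the input
    ok := math.Abs(est-exact) <= DefaultTolerance*math.Abs(exact) // relative error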

Variables

This section is empty.

Functions

func AddAny

func AddAny(things ...interface{}) interface{}

func DecodeBits

func DecodeBits(b uint32) float32

    DecodeBits returns an IEEE 754 floating point number.
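
    Assuming DecodeBits mirrors math.Float32frombits, a usage sketch:

        f := DecodeBits(0x3F800000) // 1.0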

func EncodeBits

func EncodeBits(f float32) uint32

    EncodeBits returns a new bitmapped IEEE 754 float32.
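
    Assuming EncodeBits mirrors math.Float32bits, the two functions should round-trip:

        b := EncodeBits(1.0) // 0x3F800000
        f := DecodeBits(b)   // 1.0 again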

func Fib100Digit

func Fib100Digit() (big.Int, bool)

Types

type Any

type Any interface{}

type BitMask32

type BitMask32 = uint32

    BitMask32 constants are used to mask bit operations in 32 bit numbers.

    Notes:

    ZeroBitMask      BitMask32 = 0b00000000000000000000000000000000
    MantissaBitMask  BitMask32 = 0b00000000011111111111111111111111
    ExpBitMask       BitMask32 = 0b01111111100000000000000000000000
    SignBitMask      BitMask32 = 0b10000000000000000000000000000000
    All32BitMask     BitMask32 = 0b11111111111111111111111111111111

    MantissaBitMask  0x007FFFFF    -or-        8 388 607
    ExpBitMask       0x7F800000    -or-    2 139 095 040
    SignBitMask      0x80000000    -or-    2 147 483 648
    All32BitMask     0xFFFFFFFF    -or-    4 294 967 295
        
const (
	MantissaBitMask BitMask32 = 1<<23 - 1
	ExpBitMask      BitMask32 = (0xFF) << 23
	SignBitMask     BitMask32 = 1 << 31
	All32BitMask    BitMask32 = 1<<32 - 1
)
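
    A sketch of pulling a float32 apart with these masks (EncodeBits is the package function above; variable names are illustrative):

        bits := EncodeBits(0.15625)
        m := bits & MantissaBitMask     // 23 mantissa bits
        e := (bits & ExpBitMask) >> 23  // 8 exponent bits, still biased
        s := (bits & SignBitMask) >> 31 // the sign bit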

type BitMask64

type BitMask64 = uint64

    BitMask64 constants mask bit operations in 64 bit (IEEE 754 double)
    numbers: 1 sign bit, 11 bit exponent, 52 bit mantissa.

const (
	MantissaBitMask64 BitMask64 = 1<<52 - 1
	ExpBitMask64      BitMask64 = (0x7FF) << 52
	SignBitMask64     BitMask64 = 1 << 63
	All64BitMask64    BitMask64 = 1<<64 - 1
)

type Bits

type Bits uint32

    Bits represents a float32 bit pattern in an integer container. This allows
    for bit shifting and masks.

    i = (data[3]<<0) | (data[2]<<8) | (data[1]<<16) | (data[0]<<24);

    Ref: https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
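
    A Go equivalent of that byte-order-independent read (data is an illustrative []byte of length 4):

        i := uint32(data[3]) | uint32(data[2])<<8 | uint32(data[1])<<16 | uint32(data[0])<<24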

func (Bits) Any

func (b Bits) Any() interface{}

func (*Bits) Binary

func (b *Bits) Binary() string

func (Bits) Bytes

func (b Bits) Bytes() []byte

    Bytes casts b to a []byte.
    Ref: func (littleEndian) PutUint32(b []byte, v uint32)
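
    Assuming Bytes delegates to the little-endian encoder that Ref points at, it behaves like:

        buf := make([]byte, 4)
        binary.LittleEndian.PutUint32(buf, uint32(b)) // encoding/binary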

func (Bits) Decode

func (b Bits) Decode() float32

func (*Bits) Hex

func (b *Bits) Hex() string

func (Bits) Int

func (b Bits) Int() uint32

func (Bits) PrintMethods

func (b Bits) PrintMethods(w io.Writer)

func (Bits) Shift

func (b Bits) Shift(n int) uint32

func (*Bits) String

func (b *Bits) String() string

type Hex

type Hex struct {
	// contains filtered or unexported fields
}

func (*Hex) String

func (h *Hex) String() string

type SignBit

type SignBit = int8

    SignBit represents the sign bit of a floating point number.

    Normal values are 0 or 1; a value of -1 indicates an error, infinity, or NaN.

const (
	SignError SignBit = iota - 1 // error, infinity, or NaN
	Positive                     // sign bit off; number >= 0
	Negative                     // sign bit on; number < 0
)
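
    A sketch of classifying a float32 bit pattern (signOf is hypothetical, not part of this package):

        // signOf maps a float32 bit pattern to a SignBit.
        func signOf(bits uint32) SignBit {
            if bits&ExpBitMask == ExpBitMask { // exponent all ones: Inf or NaN
                return SignError
            }
            if bits&SignBitMask != 0 {
                return Negative
            }
            return Positive
        }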

type ToleranceType

type ToleranceType = float64

    ToleranceType constants describe allowable tolerances for estimates.

const (
	FivePercent ToleranceType = 5e-2
	TwoPercent  ToleranceType = 2e-2
	OnePercent  ToleranceType = 1e-2
	OnePPT      ToleranceType = 1e-3
	OnePPM      ToleranceType = 1e-6
	OnePPB      ToleranceType = 1e-9
)

Directories

This section is empty.