Version: v0.0.0-...-4c91ef0
Published: Oct 28, 2019 License: Apache-2.0 Imports: 33 Imported by: 0



Diviner is a black-box optimization framework that uses Bigmachine to distribute a large number of computationally expensive trials across clusters of machines.

Diviner inherits some of Vizier's [1] core concepts, and adds its own:

- a parameter describes the type and range of values that can parameterize the process under optimization;

- a value is a concrete value for a parameter;

- a metric is a numeric value produced by the process under optimization;

- an objective is a metric to maximize or minimize;

- a trial is a set of values and the metrics that result from the process run under these values;

- an oracle is an algorithm that picks a set of trials to perform given a history of trials;

- a black box defines the process under optimization, namely: how to run a trial given a set of values; and finally

- a study is a black box, an objective, and an oracle.

Additionally, a run config may include a dataset, which is a pre-processing step that's required to run the trial itself. Multiple run configs may depend on the same dataset; they are computed at most once for each session.

Studies (and their associated objects) are configured by a configuration script written in the Starlark configuration language [2]. Details of the Starlark builtins provided to create Diviner configurations are described in package github.com/grailbio/diviner/script.

As an example, here's a Diviner configuration that tries to maximize the accuracy of a model, parameterized by the optimizer used and training batch size. It uses an Amazon GPU instance to train the model but creates the dataset on the local machine.

# local defines a system using the local machine with a maximum
# parallelism of 2.
local = localsystem("local", 2)

# gpu defines a system using an AWS EC2 GPU instance; a maximum of
# 100 such instances will be created at any given time.
gpu = ec2system("gpu",
    instance_type="p3.2xlarge",  # GPU instance type; other parameters omitted here
    max_parallelism=100)

# model_files is the set of (local) files required to run
# training and evaluation for the model.
model_files = ["neural_network.py", "utils.py"]

# create_dataset returns a dataset that's required to
# run the model.
def create_dataset(region):
    url = "s3://grail-datasets/regions%s" % region
    return dataset(
        name="region%s" % region,
        system=local,
        local_files=["makedataset.py"],
        script="makedataset.py %s %s" % (region, url))

# run_model returns a run configuration for a set of
# parameter values.
def run_model(pvalues):
  return run_config(
    datasets=[create_dataset(pvalues["region"])],
    system=gpu,
    local_files=model_files,
    script="""
      python neural_network.py \
        --batch_size=%(batch_size)d \
        --optimizer=%(optimizer)s \
        --region=%(region)s
    """ % pvalues)

# Define a black box that defines a set of parameters for
# neural_network.py, and provides run configs as above.
# Black boxes return metrics to Diviner by writing them
# to standard output: every line with the prefix
# "METRICS: " is considered to contain metrics for
# the trial. Metrics should be provided as comma-separated
# key-value pairs, e.g., the line:
#	METRICS: acc=0.4,loss=0.9
# indicates that the "acc" metric has a value of 0.4 and the
# "loss" metric a value of 0.9.
neural_network = black_box(
    name="neural_network",
    parameters={
        "batch_size": discrete(16, 32, 64, 128),
        "optimizer": discrete("adam", "sgd"),
        "region": discrete("chr1", "chr2"),
    },
    run=run_model)

# Study declares a study that attempts to maximize the "acc"
# metric returned by the model as defined above.
study(name="neural_network",
    black_box=neural_network,
    objective=maximize("acc"))
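The METRICS protocol described in the comments above is simple to consume. A minimal Python sketch of how a harness might extract metrics from a trial's standard output (an illustration, not Diviner's actual parser):

```python
def parse_metrics(stdout):
    """Collect metrics from lines prefixed "METRICS: ".

    Each such line carries comma-separated key=value pairs; a later
    line overwrites earlier values for the same key.
    """
    metrics = {}
    prefix = "METRICS: "
    for line in stdout.splitlines():
        if not line.startswith(prefix):
            continue
        for pair in line[len(prefix):].split(","):
            key, _, value = pair.partition("=")
            metrics[key.strip()] = float(value)
    return metrics

out = "epoch 1/10 done\nMETRICS: acc=0.4,loss=0.9\n"
parse_metrics(out)  # {'acc': 0.4, 'loss': 0.9}
```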

Scripts defined in a trial's run configuration are run on the specified machine under its default user and environment.

With a configuration in hand, the diviner tool is used to conduct trials and examine study results.


diviner list [-runs] studies...
	List available studies or runs.
diviner list -l script.dv [-runs] studies...
	List studies defined in script.dv.
diviner info [-v] [-l script] names...
	Display information for the given study or run names.
diviner metrics id
	Writes all metrics reported by the named run in TSV format.
diviner leaderboard [-objective objective] [-n N] [-values values] [-metrics metrics] studies...
	Display a leaderboard of all trials in the provided studies. The leaderboard
	uses the studies' shared objective unless overridden.
diviner run [-rounds M] [-trials N] [-stream] script.dv [studies]
	Run M rounds of N trials of the studies matching regexp.
	All studies are run if the regexp is omitted. If -stream is
	specified, the study is run in streaming mode: N trials are
	maintained in parallel; new points are queried from the
	study's oracle as needed.
diviner script script.dv study [-param=value...]
	Render a bash script implementing the study from script.dv
	with the provided parameters.
diviner run script.dv runs...
	Re-run previous runs in studies defined in script.dv,
	using parameter values taken from the named runs.
diviner logs [-f] [-since=time] run
	Write the logs for the given run to standard output.
diviner [-db type,name] create-table
	Create the underlying database table required for storing
	Diviner studies and runs.

diviner list [-runs] studies... lists the studies matching the regular expressions given. If -runs is specified then the study's runs are listed instead.

diviner list -l script.dv [-runs] studies... lists information for all of the studies defined in the provided script matching the studies regular expressions. If no patterns are provided, then all studies are shown. If -runs is given, the matching studies' runs are listed instead.

diviner info [-v] [-l script] names... displays detailed information about the matching study or run names. If -v is given then even more verbose output is given. If -l is given, then studies are loaded from the provided script.

diviner metrics id writes all metrics reported by the provided run to standard output in TSV format. Every unique metric name reported over time is a single column; missing values are denoted by "NA".
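One way to picture that TSV layout (a hypothetical sketch, not the tool's implementation): reports arrive over time, the columns are the union of metric names seen so far, and absent values become "NA":

```python
def metrics_to_tsv(reports):
    """reports: metric dicts in the order they were reported over time."""
    columns = []
    for report in reports:
        for name in report:
            if name not in columns:
                columns.append(name)
    lines = ["\t".join(columns)]
    for report in reports:
        lines.append("\t".join(str(report.get(c, "NA")) for c in columns))
    return "\n".join(lines)

tsv = metrics_to_tsv([{"acc": 0.4}, {"acc": 0.5, "loss": 0.9}])
# acc	loss
# 0.4	NA
# 0.5	0.9
```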

diviner leaderboard [-objective objective] [-n N] [-values values] [-metrics metrics] studies... displays a leaderboard of all trials matching the provided studies. The leaderboard is ordered by the studies' shared objective unless overridden by the -objective flag. Parameter values and additional metrics may be displayed by providing regular expressions to the -values and -metrics flags respectively.
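The ordering described above amounts to sorting trials by the objective's metric, descending for a maximize objective and ascending for a minimize one. A sketch with hypothetical names (not the tool's code):

```python
def leaderboard(trials, metric, maximize=True, n=10):
    """Return the top-n trials ordered by the objective metric.

    trials: (values, metrics) pairs; trials that did not report
    the metric are excluded.
    """
    ranked = [t for t in trials if metric in t[1]]
    ranked.sort(key=lambda t: t[1][metric], reverse=maximize)
    return ranked[:n]

trials = [
    ({"batch_size": 16, "optimizer": "sgd"}, {"acc": 0.4, "loss": 0.9}),
    ({"batch_size": 64, "optimizer": "adam"}, {"acc": 0.7, "loss": 0.5}),
]
best = leaderboard(trials, "acc")[0]
# best[0] is {'batch_size': 64, 'optimizer': 'adam'}
```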

diviner run [-rounds M] [-trials N] [-stream] script.dv [studies] performs trials as defined in the provided script. M rounds of N trials each are performed for each of the studies that matches the argument. If no studies are specified, all studies are run concurrently. If -stream is specified, the study is run in streaming mode: N trials are maintained in parallel; new points are queried from the study's oracle as needed.

diviner run script.dv runs... re-runs one or more runs from studies defined in the provided script. Specifically: parameter values are taken from the named runs and re-launched with the current version of the study from the script.

diviner script script.dv study [-param=value...] renders a bash script containing functions for each of the study's datasets as well as the study itself. This is mostly intended for debugging and development, but may also be used to manually recreate the actions taken by a run.

diviner logs [-f] [-since=time] run writes logs from the named run to standard output. If -f is given, the log is followed and updates are written as they appear. If -since is given, only log entries newer than the provided time are written.

diviner [-db type,name] create-table creates the underlying database table of the provided type and name (default dynamodb,diviner). This is a one-time setup operation required before using the table.

[1] https://www.kdd.org/kdd2017/papers/view/google-vizier-a-service-for-black-box-optimization
[2] https://docs.bazel.build/versions/master/skylark/language.html
