github.com/c3sr/tensorflow
module package, version 0.3.7
Repository: https://github.com/c3sr/tensorflow.git
Documentation: pkg.go.dev

# README

MLModelScope TensorFlow Agent


This is the TensorFlow agent for MLModelScope, an open-source, framework- and hardware-agnostic, extensible and customizable platform for evaluating and profiling ML models across datasets, frameworks, and systems, and within AI application pipelines.

Currently it has most of the models from the TensorFlow Model Zoo built in, including Image Classification, Object Detection, Instance Segmentation, Semantic Segmentation, and Image Enhancement models, with more on the way. You can evaluate the ~80 built-in models on any system of interest with either a local TensorFlow installation or TensorFlow Docker images.

Check out MLModelScope; contributions are welcome.

Bare Minimum Installation

Prerequisite System Library Installation

We first discuss a bare minimum tensorflow-agent installation without the tracing and profiling capabilities. For this to work, you need the following system libraries preinstalled:

  • The CUDA library (required)
  • The CUPTI library (required)
  • The Tensorflow library (required)
  • The libjpeg-turbo library (optional, but preferred)

The CUDA Library

Please refer to the Nvidia CUDA library installation instructions. Find the location of your local CUDA installation, which is typically /usr/local/cuda/, and set up the path to the libcublas.so library. Place the following in either your ~/.bashrc or ~/.zshrc file:

export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/lib64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64

The CUPTI Library

Please refer to the Nvidia CUPTI library installation instructions. Find the location of your local CUPTI installation, which is typically /usr/local/cuda/extras/CUPTI, and set up the path to the libcupti.so library. Place the following in either your ~/.bashrc or ~/.zshrc file:

export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/extras/CUPTI/lib64

The TensorFlow C Library

The TensorFlow C library is required for the TensorFlow Go package. If you want to use TensorFlow Docker Images (e.g. NVIDIA GPU CLOUD (NGC)) instead, skip this step for now and refer to our later section on this.

You can download a pre-built TensorFlow C library from Install TensorFlow for C.

Extract the downloaded archive to /opt/tensorflow/:

sudo mkdir -p /opt/tensorflow
sudo tar -C /opt/tensorflow -xzf (downloaded file)

Since the TensorFlow C library is extracted to a non-system directory, configure the linker environment variables. Place the following in either your ~/.bashrc or ~/.zshrc file:

Linux

export LIBRARY_PATH=$LIBRARY_PATH:/opt/tensorflow/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/tensorflow/lib

macOS

export LIBRARY_PATH=$LIBRARY_PATH:/opt/tensorflow/lib
export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:/opt/tensorflow/lib
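The export lines above append unconditionally, which leaves a stray leading colon when the variable starts out empty. A small helper avoids that; append_path is our own name, not part of the agent or TensorFlow, and the same helper works for the CUDA and CUPTI paths earlier:

```shell
# Hypothetical helper: append a directory to a colon-separated path
# variable, skipping the separator when the variable is empty or unset.
append_path() {
  ap_var="$1"; ap_dir="$2"
  eval "ap_cur=\$$ap_var"
  if [ -z "$ap_cur" ]; then
    eval "export $ap_var=\"$ap_dir\""
  else
    eval "export $ap_var=\"$ap_cur:$ap_dir\""
  fi
}

append_path LIBRARY_PATH /opt/tensorflow/lib
append_path LD_LIBRARY_PATH /opt/tensorflow/lib
```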

You can test the installed TensorFlow C library using an example C program.

To build the TensorFlow C library from source, refer to TensorFlow in Go.

Use libjpeg-turbo for Image Preprocessing

libjpeg-turbo is a JPEG image codec that uses SIMD instructions (MMX, SSE2, AVX2, NEON, AltiVec) to accelerate baseline JPEG compression and decompression. It outperforms libjpeg by a significant amount.

You need libjpeg installed.

sudo apt-get install libjpeg-dev  

libjpeg-turbo is used by default; to opt out, build with the nolibjpeg tag.

To install libjpeg-turbo, refer to libjpeg-turbo.

Linux

  export TURBO_VER=2.0.2
  cd /tmp
  wget https://cfhcable.dl.sourceforge.net/project/libjpeg-turbo/${TURBO_VER}/libjpeg-turbo-official_${TURBO_VER}_amd64.deb
  sudo dpkg -i libjpeg-turbo-official_${TURBO_VER}_amd64.deb

macOS

brew install jpeg-turbo

Installation of Go for Compilation

Since we use Go for MLModelScope development, you need Go installed on your system before proceeding.

Please follow Installing Go Compiler to install Go.

Bare Minimum TensorFlow-agent Installation

Download and install the MLModelScope TensorFlow agent by running the following command in any location, assuming you have installed Go following the instructions above.

go get -v github.com/c3sr/tensorflow

You can then install the dependency packages through go get.

cd $GOPATH/src/github.com/c3sr/tensorflow
go get -u -v ./...

An alternative way to install the dependency packages is to use Dep.

dep ensure -v

This installs the dependencies in vendor/.

The cgo interface passes Go pointers to the C API, which trips a pointer check in the cgo runtime. We can disable the check by placing

export GODEBUG=cgocheck=0

in your ~/.bashrc or ~/.zshrc file and then running either source ~/.bashrc or source ~/.zshrc.
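If you prefer not to weaken the cgo check for every Go program you run, GODEBUG can instead be scoped to a single command. The echo below merely demonstrates that the child process sees the variable; the commented line shows the intended use with the agent:

```shell
# Scope GODEBUG to one command rather than exporting it shell-wide.
GODEBUG=cgocheck=0 sh -c 'echo "child sees GODEBUG=$GODEBUG"'

# e.g.:
# GODEBUG=cgocheck=0 ./tensorflow-agent predict urls --profile=false --publish=false
```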

Build the TensorFlow agent with GPU enabled

cd $GOPATH/src/github.com/c3sr/tensorflow/tensorflow-agent
go build 

Build the TensorFlow agent without GPU or libjpeg-turbo

cd $GOPATH/src/github.com/c3sr/tensorflow/tensorflow-agent
go build -tags="nogpu nolibjpeg" 

If everything is successful, you should have an executable tensorflow-agent binary in the current directory.

Configuration Setup

To run the agent, you need to set up the correct configuration file. Some of the settings may not make sense for every testing scenario, but they are required and will be needed for later-stage testing. Some of the port numbers specified below can be changed depending on how you later set up those services.

So let's just set them up as is, and worry about the detailed configuration parameter values later.

You must have a carml config file called .carml_config.yml under your home directory. An example config file, carml_config.yml.example, is in github.com/c3sr/MLModelScope; you can copy it to ~/.carml_config.yml.

The following configuration file can be placed in $HOME/.carml_config.yml or specified via the --config="path" option.

app:
  name: carml
  debug: true
  verbose: true
  tempdir: ~/data/carml
registry:
  provider: consul
  endpoints:
    - localhost:8500
  timeout: 20s
  serializer: jsonpb
database:
  provider: mongodb
  endpoints:
    - localhost
tracer:
  enabled: true
  provider: jaeger
  endpoints:
    - localhost:9411
  level: FULL_TRACE
logger:
  hooks:
    - syslog
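If you prefer to script this step, a heredoc can write the example above to the expected location. Note that this overwrites any existing ~/.carml_config.yml, so back it up first:

```shell
# Write the example config shown above to $HOME/.carml_config.yml.
# WARNING: this overwrites an existing file.
cat > "$HOME/.carml_config.yml" <<'EOF'
app:
  name: carml
  debug: true
  verbose: true
  tempdir: ~/data/carml
registry:
  provider: consul
  endpoints:
    - localhost:8500
  timeout: 20s
  serializer: jsonpb
database:
  provider: mongodb
  endpoints:
    - localhost
tracer:
  enabled: true
  provider: jaeger
  endpoints:
    - localhost:9411
  level: FULL_TRACE
logger:
  hooks:
    - syslog
EOF
```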

Test Installation

With the configuration in place and the bare minimum installation above, you should be ready to test the installation and see how things work.

Here are a few examples. First, make sure you are in the right location:

cd $GOPATH/src/github.com/c3sr/tensorflow/tensorflow-agent

To list the available commands

./tensorflow-agent -h

To see a list of models that we can run with this agent

./tensorflow-agent info models

To run an inference using the default DNN model mobilenet_v1_1.0_224 with a default input image.

./tensorflow-agent predict urls --profile=false --publish=false

The --profile=false --publish=false flags tell the agent not to use the profiling capability or publish the results, since we haven't yet installed the MongoDB database to store profiling data or the tracer service to accept tracing information.

External Service Installation to Enable Tracing and Profiling

We now discuss how to install a few external services that make the agent fully useful in terms of collecting tracing and profiling data.

External Services

MLModelScope relies on a few external services. These services provide tracing, registry, and database servers.

These services can be installed and enabled in different ways. Below we show how this can be done with Docker. You can also install these services directly from binaries or source instead of using Docker.

Installing Docker

Refer to Install Docker.

On Ubuntu, an easy way is:

curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER

On macOS, install Docker Desktop.

Starting Trace Server

This service is required.

  • On x86 (e.g. intel) machines, start jaeger by
docker run -d -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 -p5775:5775/udp -p6831:6831/udp -p6832:6832/udp \
  -p5778:5778 -p16686:16686 -p14268:14268 -p9411:9411 jaegertracing/all-in-one:latest
  • On ppc64le (e.g. minsky) machines, start jaeger by
docker run -d -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 -p5775:5775/udp -p6831:6831/udp -p6832:6832/udp \
  -p5778:5778 -p16686:16686 -p14268:14268 -p9411:9411 carml/jaeger:ppc64le-latest

The trace server runs on http://localhost:16686
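Since the agent expects the tracer to be reachable when profiling is enabled, it can help to wait until Jaeger's port accepts connections before starting the agent. This is a sketch of our own (wait_for_port is not an agent command) and relies on bash's /dev/tcp redirection:

```shell
# Hypothetical helper: poll a TCP port until it accepts connections,
# giving up after a number of one-second attempts (default 30).
wait_for_port() {
  wfp_host="$1"; wfp_port="$2"; wfp_tries="${3:-30}"
  while [ "$wfp_tries" -gt 0 ]; do
    if (exec 3<>"/dev/tcp/$wfp_host/$wfp_port") 2>/dev/null; then
      return 0    # port is open
    fi
    wfp_tries=$((wfp_tries - 1))
    sleep 1
  done
  return 1        # timed out
}

# e.g.: wait_for_port localhost 16686 && ./tensorflow-agent serve -l -d -v
```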

Starting Registry Server

This service is not required if using TensorFlow-agent for local evaluation.

  • On x86 (e.g. intel) machines, start consul by
docker run -p 8500:8500 -p 8600:8600 -d consul
  • On ppc64le (e.g. minsky) machines, start consul by
docker run -p 8500:8500 -p 8600:8600 -d carml/consul:ppc64le-latest

The registry server runs on http://localhost:8500

Starting Database Server

This service is not required if you are not publishing evaluation results to a database.

  • On x86 (e.g. intel) machines, start mongodb by
docker run -p 27017:27017 --restart always -d mongo:3.0

You can also mount the database volume to a local directory using

docker run -p 27017:27017 --restart always -d  -v $HOME/data/carml/mongo:/data/db mongo:3.0

Configuration

You must have a carml config file called .carml_config.yml under your home directory; an example was discussed above. If you chose different ports for the external services above, update the port numbers in the config accordingly.
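For example, if your Jaeger collector listens on a non-default port, the tracer endpoint must be updated to match. A sketch using sed on a throwaway copy of the tracer section (9412 is an arbitrary stand-in port; GNU sed shown, on macOS use `sed -i ''`):

```shell
# Demonstrate the edit on a temporary file mirroring the tracer section.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
tracer:
  enabled: true
  provider: jaeger
  endpoints:
    - localhost:9411
  level: FULL_TRACE
EOF

# Point the tracer at the port Jaeger actually listens on:
sed -i 's/localhost:9411/localhost:9412/' "$cfg"
```

The same substitution applied to ~/.carml_config.yml updates the real config.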

Testing

The testing steps are very similar to those discussed above, except that you can now safely use both the profiling and publishing services.

Use the Agent with the MLModelScope Web UI

./tensorflow-agent serve -l -d -v

Refer to here to run the web UI to interact with the agent.

Use the Agent through Command Line

Run ./tensorflow-agent -h to list the available commands.

Run ./tensorflow-agent info models to list the available models.

Run ./tensorflow-agent predict to evaluate a model; this runs the default evaluation. ./tensorflow-agent predict -h shows the available flags you can set.

An example run is

./tensorflow-agent predict urls --trace_level=FRAMEWORK_TRACE --model_name=Inception_v3

Use the Agent through Pre-built Docker Images

We have pre-built Docker images on Docker Hub: c3sr/tensorflow-agent:amd64-cpu-latest and c3sr/tensorflow-agent:amd64-gpu-latest. The entrypoint is set to tensorflow-agent, so these images behave like the command line above.

An example run is

docker run --gpus=all --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --privileged=true \
    --network host \
    -v ~/.carml_config.yml:/root/.carml_config.yml \
    -v ~/results:/go/src/github.com/c3sr/tensorflow/results \
    c3sr/tensorflow-agent:amd64-gpu-latest predict urls --trace_level=FRAMEWORK_TRACE --model_name=Inception_v3

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be insufficient for TensorFlow. NVIDIA recommends the use of the following flags: --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 ...

NOTE: To run with a GPU, you need to meet the following requirements:

  • Docker >= 19.03 with nvidia-container-toolkit (otherwise need to use nvidia-docker)
  • CUDA >= 10.1
  • NVIDIA Driver >= 418.39

Notes on installing TensorFlow from source (ignore this if you are a user)

Install Bazel

Build

Build TensorFlow 1.14.0 with the following scripts.

go get -d github.com/tensorflow/tensorflow/tensorflow/go
cd ${GOPATH}/src/github.com/tensorflow/tensorflow
git fetch --all
git checkout v1.14.0
./configure

Configure the build and then run

bazel build -c opt //tensorflow:libtensorflow.so
cp ${GOPATH}/src/github.com/tensorflow/tensorflow/bazel-bin/tensorflow/libtensorflow.so /opt/tensorflow/lib

Put the directory that contains libtensorflow_framework.so and libtensorflow.so on $LD_LIBRARY_PATH (and $LIBRARY_PATH for linking).

PowerPC

For TensorFlow compilation, here are the recommended tensorflow-configure settings:

export CC_OPT_FLAGS="-mcpu=power8 -mtune=power8"
export GCC_HOST_COMPILER_PATH=/usr/bin/gcc

ANACONDA_HOME=$(conda info --json | python -c "import sys, json; print(json.load(sys.stdin)['default_prefix'])")
export PYTHON_BIN_PATH=$ANACONDA_HOME/bin/python
export PYTHON_LIB_PATH=$ANACONDA_HOME/lib/python2.7/site-packages

export USE_DEFAULT_PYTHON_LIB_PATH=0
export TF_NEED_CUDA=1
export TF_CUDA_VERSION=9.0
export CUDA_TOOLKIT_PATH=/usr/local/cuda-9.0
export TF_CUDA_COMPUTE_CAPABILITIES=3.5,3.7,5.2,6.0,7.0
export CUDNN_INSTALL_PATH=/usr/local/cuda-9.0
export TF_CUDNN_VERSION=7
export TF_NEED_GCP=1
export TF_NEED_OPENCL=0
export TF_NEED_HDFS=1
export TF_NEED_JEMALLOC=1
export TF_ENABLE_XLA=1
export TF_CUDA_CLANG=0
export TF_NEED_MKL=0
export TF_NEED_MPI=0
export TF_NEED_VERBS=0
export TF_NEED_GDR=0
export TF_NEED_S3=0


# Functions

Asset loads and returns the asset for the given name.
AssetDir returns the file names below a certain directory embedded in the file by go-bindata.
AssetInfo loads and returns the asset info for the given name.
AssetNames returns the names of the assets.
MustAsset is like Asset but panics when Asset would return an error.
RestoreAsset restores an asset under the given directory.
RestoreAssets restores an asset under the given directory recursively.

# Constants

Normally this is "VISIBLE" unless you are inheriting a different value from another ApiDef.
Hide this op by putting it into an internal namespace (or whatever is appropriate in the target language).
Do not include this op in the generated API.
Publicly visible in the API.
The operation was aborted, typically due to a concurrency issue like sequencer check failures, transaction aborts, etc.
Some entity that we attempted to create (e.g., file or directory) already exists.
The operation was cancelled (typically by the caller).
Unrecoverable data loss or corruption.
Deadline expired before operation could complete.
An extra enum entry to prevent people from writing code that fails to compile when a new code is added.
Operation was rejected because the system is not in a state required for the operation's execution.
Internal errors.
Client specified an invalid argument.
Some requested entity (e.g., file or directory) was not found.
Not an error; returned on success.
Operation tried to iterate past the valid input range.
The caller does not have permission to execute the specified operation.
Some resource has been exhausted, perhaps a per-user quota, or perhaps the entire file system is out of space.
The request does not have valid authentication credentials for the operation.
The service is currently unavailable.
Operation is not implemented or not supported/enabled in this service.
Unknown error.
Data types that all computation devices are expected to be capable to support.
Do not use! These are only for parameters.
Not a legal value for DataType.
Note: The logging level 10 cannot be named DEBUG.
No optimizations.
L1 is the default level.
The following settings turn on compilation, with higher values being more aggressive.
Enable some aggressive optimizations that use assumptions that TF graphs may break.
The default setting (SCHEDULING and SWAPPING HEURISTICS only).
Use any combination of swapping and recomputation heuristics.
Driven by manual op-level annotations.
Disabled in the meta-optimizer.
Recomputation heuristics will recompute ops (such as Relu activation) during backprop instead of storing them, reducing peak memory usage.
Scheduling will split big ops such as AddN and try to enforce a schedule of the new computations that decreases peak memory usage.
Swapping heuristic will move a tensor from the GPU to the CPU and move it back when needed to reduce peak memory usage.
Internal legacy format.
Deprecated format: tf.Saver() which works with tensorflow::table::Table.
Current format: more efficient.

# Variables

Version ...
# Structs

No description provided by the author
An allocation/de-allocation operation performed by the allocator.
No description provided by the author
Used to specify and override the default API & behavior in the generated code for client languages, from what you would get from the OpDef alone.
No description provided by the author
Description of the graph-construction-time configuration of this Op.
If you specify any endpoint, this will replace all of the inherited endpoints.
No description provided by the author
An asset file def for a single file or a set of sharded files with the same name.
Protocol buffer representing the value for an attr used to configure an Op.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
LINT.IfChange.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Describes the metadata related to a checkpointed tensor.
Special header that is associated with a bundle.
Defines a subgraph in another `GraphDef` as a set of feed points and nodes to be fetched or executed.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Defines a TensorFlow cluster as a set of jobs.
CollectionDef should cover most collections.
AnyList is used for collecting Any protos.
No description provided by the author
BytesList is used for collecting strings and serialized protobufs.
No description provided by the author
FloatList is used for collecting float values.
No description provided by the author
Int64List is used for collecting int, int64 and long values.
No description provided by the author
NodeList is used for collecting nodes in graph.
No description provided by the author
Supplies one or more device names as members of the group identified by group_key.
Gives the complete membership of the group identified by group_key.
Supplies data about one collective op belonging to the instance identified by instance_key.
Confirms that every op in the instance has consistently declared itself.
Protocol buffer representing a CondContext object.
Session configuration parameters.
Everything inside Experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
Container for any kind of control flow context.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Inputs of this node.
Outputs of this node.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Protocol buffer representing a CriticalSection.
Protocol buffer representing a CriticalSection execution.
No description provided by the author
No description provided by the author
Options for initializing DebuggerState in TensorFlow Debugger (tfdbg).
Option for watching a node in TensorFlow Debugger (tfdbg).
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Protocol buffer representing an event that happened during the execution of a Brain model.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Options specific to the execution of a single step.
No description provided by the author
No description provided by the author
A function can be instantiated when the runtime can bind every attr with a value.
A library is a set of named functions.
No description provided by the author
No description provided by the author
Request for next agreed-upon step_id for the specified graph_keys.
Next valid step_ids for one or more graph_keys.
No description provided by the author
No description provided by the author
Configuration for breaking down a visible GPU into multiple "virtual" devices.
GradientDef defines the gradient function of a function defined in a function library.
Represents the graph of operations.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Protocol buffer representing a handle to a tensorflow resource.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Serialization format for histogram module in core/lib/histogram/histogram.h.
No description provided by the author
Protocol buffer representing the metadata for an iterator's state stored as a Variant tensor.
Defines a single job in a TensorFlow cluster.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
A collection of KernelDefs.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Out-of-band request to begin or end logging, or to retrieve logs for particular steps.
No description provided by the author
Protocol buffer used for logging messages to the events file.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
For memory tracking.
NOTE: This protocol buffer is evolving, and will go through revisions in the coming months.
Meta information regarding the graph to be exported.
A list of attr names and their values.
No description provided by the author
A pair of tensor name and tensor values.
Records the creation of a new replay session.
No description provided by the author
Time/size stats recorded for a single execution of a graph node.
Output sizes recorded for a single execution of a graph node.
Defines an operation.
For describing inputs and outputs.
Description of the graph-construction-time configuration of this Op.
Information about version-dependent deprecation of an op.
A proto representation of an eager operation.
A collection of OpDefs.
Options passed to the graph optimizer.
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
No description provided by the author
Protocol buffer representing a QueueRunner.
For serializing and restoring the state of ReaderBase, see reader_base.h for details.
No description provided by the author
Extra data needed on a non-RDMA RecvBufResponse.
Protocol buffer representing a handle to a tensorflow resource.
Reset() allows misbehaving or slow sessions to be aborted and closed, and causes their resources eventually to be released.
Protocol buffer representing a handle to a tensorflow resource.
Message to describe custom graph optimizer and its parameters.
Metadata output (i.e., non-Tensor) for a single Run() call.
Options for a single Run() call.
Everything inside Experimental is subject to change and is not subject to API stability guarantees in https://www.tensorflow.org/guide/version_compat.
SavedModel is the high level serialization format for TensorFlow Models.
Protocol buffer representing the configuration of a Saver.
Defines the configuration of a single TensorFlow server.
Protocol buffer used for logging session state.
SignatureDef defines the signature of a computation supported by a TensorFlow graph.
A Summary is a set of named values to be displayed by the visualizer.
Metadata associated with a series of Summary data.
A SummaryMetadata encapsulates information on which plugins are able to make use of a certain summary value.
For logging the metadata output for a single session.run() call.
Defines a connection between two tensors in a `GraphDef`.
Information about a Tensor necessary for feeding or retrieval.
For sparse tensors, the COO encoding stores a triple of values, indices, and shape.
Protocol buffer representing a tensor.
Dimensions of a tensor.
One dimension of the tensor.
Can only be interpreted if you know the corresponding TensorShape.
Extent of the slice in one dimension.
Out-of-band request to configure distributed tracing.
Protocol buffer representing the values in ControlFlowContext.
Protocol buffer representing a Variable.
Protocol buffer representing the serialization format of DT_VARIANT tensors.
Version information for a piece of serialized data. There are different types of versions for each type of data (GraphDef, etc.), but they all have the same common shape described here.
Protocol buffer representing a WhileContext object.

# Interfaces

EagerServiceClient is the client API for EagerService service.
EagerServiceServer is the server API for EagerService service.
MasterServiceClient is the client API for MasterService service.
MasterServiceServer is the server API for MasterService service.
WorkerServiceClient is the client API for WorkerService service.
WorkerServiceServer is the server API for WorkerService service.

# Type aliases

An enum indicating the endianness of the platform that produced this bundle.
The canonical error codes for TensorFlow APIs.
Control the use of the compiler/jit.
Optimization level.
Enum controlling the number of times to run optimizers.
TODO(pbar): Turn this into a TraceOptions proto which allows tracing to be controlled in a more orthogonal manner?
A version number that identifies a different on-disk checkpoint format.
Current health status of a worker.
Indicates the behavior of the worker when an internal error or shutdown signal is received.