TensorFlow: an open-source deep learning framework from Google
TensorFlow is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) communicated between them.
Release 2.0.0-rc1
Major Features and Improvements
TensorFlow 2.0 focuses on simplicity and ease of use, featuring updates such as:
* Easy model building with Keras and eager execution.
* Robust model deployment in production on any platform.
* Powerful experimentation for research.
* API simplification by reducing duplication and removing deprecated endpoints.
For details on best practices with 2.0, see the Effective 2.0 guide.
For information on upgrading your existing TensorFlow 1.x models, please refer to our Upgrade and Migration guides. We have also released a collection of tutorials and getting-started guides.
Highlights
* TF 2.0 delivers Keras as the central high-level API used to build and train models. Keras provides several model-building APIs (Sequential, Functional, and Subclassing), along with eager execution for immediate iteration and intuitive debugging, and tf.data for building scalable input pipelines. Check out the guide for additional details.
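As a minimal sketch of the Keras-plus-eager workflow described above (layer sizes here are arbitrary):

```python
import tensorflow as tf

# Build a small model with the Sequential API; sizes are illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])

# With eager execution, calling the model runs immediately -- no Session needed.
y = model(tf.random.normal([8, 4]))
```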
* Distribution Strategy: TF 2.0 users will be able to use the `tf.distribute.Strategy` API to distribute training with minimal code changes, yielding great out-of-the-box performance. It supports distributed training with Keras `model.fit`, as well as with custom training loops. Multi-GPU support is available, along with experimental support for multi-worker training and Cloud TPUs. Check out the guide for more details.
* Functions, not Sessions. The traditional declarative programming model of building a graph and executing it via a `tf.Session` is discouraged, and replaced by writing regular Python functions. Using the `tf.function` decorator, such functions can be turned into graphs which can be executed remotely, serialized, and optimized for performance.
* Unification of `tf.train.Optimizers` and `tf.keras.Optimizers`. Use `tf.keras.Optimizers` for TF 2.0. `compute_gradients` is removed as a public API; use `GradientTape` to compute gradients.
* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with the tf.data, tf.distribute, and tf.keras APIs.
* Unification of exchange formats to SavedModel. All TensorFlow ecosystem projects (TensorFlow Lite, TensorFlow.js, TensorFlow Serving, TensorFlow Hub) accept SavedModels. Model state should be saved to and restored from SavedModels.
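The functions-not-Sessions model can be sketched as follows; `scaled_sum` is a hypothetical example function:

```python
import tensorflow as tf

@tf.function  # traced into a graph on first call, then reused
def scaled_sum(x, y):
    return tf.reduce_sum(x + y) * 2.0

# Called like a regular Python function; no Session.run required.
out = scaled_sum(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0]))
```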
* API Changes: Many API symbols have been renamed or removed, and argument names have changed. Many of these changes are motivated by consistency and clarity. The 1.x API remains available in the compat.v1 module. A list of all symbol changes can be found here.
* API cleanup, including removing `tf.app`, `tf.flags`, and `tf.logging` in favor of absl-py.
* No more global variables with helper methods like `tf.global_variables_initializer` and `tf.get_global_step`.
* Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
* Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
* Fixes autocomplete for most TensorFlow API references by switching to use relative imports in API `__init__.py` files.
Breaking Changes

Many backwards incompatible API changes have been made to clean up the APIs and make them more consistent.

Toolchains:
* TensorFlow 1.15 is built using devtoolset-7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.
* TensorFlow code now produces two different pip packages: tensorflow_core, containing all the code (in the future it will contain only the private implementation), and tensorflow, which is a virtual pip package that forwards to tensorflow_core (and in the future will contain only the public API of TensorFlow). We don't expect this to be breaking, unless you were importing directly from the implementation.

* `tf.contrib`: `tf.contrib` has been deprecated, and functionality has been either migrated to the core TensorFlow API, to an ecosystem project such as tensorflow/addons or tensorflow/io, or removed entirely.
  * Remove `tf.contrib.timeseries` dependency on TF distributions.
  * Replace contrib references with `tf.estimator.experimental.*` for APIs in early_stopping.py.

* `tf.estimator`:
  * Premade estimators in the tf.estimator.DNN/Linear/DNNLinearCombined family have been updated to use `tf.keras.optimizers` instead of the `tf.compat.v1.train.Optimizer`s. If you do not pass in an `optimizer=` arg, or if you use a string, the premade estimator will use the Keras optimizer. This is checkpoint-breaking, as the optimizers have separate variables. A checkpoint converter tool for converting optimizers is included with the release, but if you want to avoid any change, switch to the v1 version of the estimator: `tf.compat.v1.estimator.DNN/Linear/DNNLinearCombined*`.
  * Default aggregation for canned Estimators is now `SUM_OVER_BATCH_SIZE`. To maintain the previous default behavior, please pass `SUM` as the loss aggregation method.
  * Canned Estimators no longer support the `input_layer_partitioner` arg in the API. If you have this arg, you will have to switch to `tf.compat.v1` canned Estimators.
  * `Estimator.export_savedmodel` has been renamed `export_saved_model`.
  * When saving to SavedModel, Estimators will strip default op attributes. This is almost always the correct behavior, as it is more forwards compatible, but if you require that default attributes are saved with the model, please use `tf.compat.v1.Estimator`.
  * Feature Columns have been upgraded to be more Eager-friendly and to work with Keras. As a result, `tf.feature_column.input_layer` has been deprecated in favor of `tf.keras.layers.DenseFeatures`. v1 feature columns have direct analogues in v2 except for `shared_embedding_columns`, which are not cross-compatible between v1 and v2. Use `tf.feature_column.shared_embeddings` instead.

* `tf.keras`:
  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
  * `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel. HDF5 files are still supported.
  * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
  * Layers now default to float32, and automatically cast their inputs to the layer's dtype. If you had a model that used float64, it will probably silently use float32 in TensorFlow 2, and a warning will be issued that starts with "Layer is casting an input tensor from dtype float64 to the layer's dtype of float32". To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.
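For example, a model that should stay in float64 can opt out of the new float32 default like this (a minimal sketch):

```python
import tensorflow as tf

# Make float64 the default dtype for all subsequently created Keras layers.
tf.keras.backend.set_floatx("float64")

layer = tf.keras.layers.Dense(3)
y = layer(tf.ones([2, 5], dtype=tf.float64))  # stays float64, no silent cast

# Restore the default so the rest of the program is unaffected.
tf.keras.backend.set_floatx("float32")
```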

* `tf.lite`: Removed `lite.OpHint`, `lite.experimental`, and `lite.constant` from the 2.0 API.
* Tensors are no longer hashable, but instead compare elementwise with `==` and `!=`. Use `tf.compat.v1.disable_tensor_equality()` to return to the previous behavior.
* Performing equality operations on Tensors or Variables with incompatible shapes no longer throws an exception. Instead, `__eq__` returns False and `__ne__` returns True.
* Removed `tf.string_split` from the v2 API.
* Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.
* Added `UnifiedGRU` as the new GRU implementation for TF 2.0. The default recurrent activation function for GRU has changed from `hard_sigmoid` to `sigmoid`, and `reset_after` defaults to True in 2.0. Historically, the recurrent activation was `hard_sigmoid` because it is faster than `sigmoid`. With the new unified backend between CPU and GPU modes, since the cuDNN kernel uses sigmoid, we changed the default for CPU mode to sigmoid as well. With that, the default GRU is compatible with both the CPU and GPU kernels. This enables users with GPUs to use the cuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. If you want to use a 1.x pretrained checkpoint, please construct the layer with GRU(recurrent_activation='hard_sigmoid', reset_after=False) to fall back to 1.x behavior.
* `CUDNN_INSTALL_PATH`, `TENSORRT_INSTALL_PATH`, `NCCL_INSTALL_PATH`, and `NCCL_HDR_PATH` are deprecated. Use `TF_CUDA_PATHS` instead, which supports a comma-separated list of base paths that are searched to find CUDA libraries and headers.
Refer to our public project status tracker and issues tagged with 2.0
on GitHub for insight into recent issues and development progress.
If you experience any snags when using TF 2.0, please let us know at the TF 2.0 Testing User Group. We have a support mailing list as well as weekly testing meetings, and would love to hear your migration feedback and questions.
Bug Fixes and Other Changes

* `tf.contrib`:
  * Expose `tf.contrib.proto.*` ops in `tf.io` (they will exist in TF2).

* `tf.data`:
  * Add support for TensorArrays to `tf.data.Dataset`.
  * Integrate Ragged Tensors with `tf.data`.
  * All core and experimental tf.data transformations that take user-defined functions can now span multiple devices.
  * Extended the TF 2.0 support for `shuffle(..., reshuffle_each_iteration=True)` and `cache()` to work across different Python iterators for the same dataset.
  * Removed the `experimental_numa_aware` option from `tf.data.Options`.
  * Add `num_parallel_reads` and support for passing in a Dataset containing filenames into `TextLineDataset` and `FixedLengthRecordDataset`.
  * Add support for defaulting the value of the `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.
  * Promoted `tf.data.experimental.enumerate_dataset` to core as `tf.data.Dataset.enumerate`.
  * Promoted `tf.data.experimental.unbatch` to core as `tf.data.Dataset.unbatch`.
  * Added an option for introducing slack in the pipeline to reduce CPU contention, via `tf.data.Options().experimental_slack = True`.
  * Added experimental support for parallel batching to `batch()` and `padded_batch()`. This functionality can be enabled through `tf.data.Options()`.
  * Support cancellation of long-running `reduce`.
  * We now use the `dataset` node name as a prefix instead of the op name, to identify the component correctly in metrics for pipelines with repeated components.
  * Improved the performance of datasets using `from_tensors()`.
  * Added support for datasets as inputs to `from_tensors` and `from_tensor_slices`, and for batching and unbatching of nested datasets.
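The newly promoted `tf.data.Dataset.enumerate` can be sketched as:

```python
import tensorflow as tf

# enumerate() pairs each element with its int64 index,
# mirroring Python's built-in enumerate.
ds = tf.data.Dataset.from_tensor_slices([10, 20, 30]).enumerate()
pairs = [(int(i), int(v)) for i, v in ds]
```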
* `tf.distribute`:
  * Enable `tf.distribute.experimental.MultiWorkerMirroredStrategy` to work in eager mode.
  * Callbacks are supported in `MultiWorkerMirroredStrategy`.
  * Disable `run_eagerly` and distribution strategy if there are symbolic tensors added to the model using `add_metric` or `add_loss`.
  * Loss and gradients should now more reliably be correctly scaled w.r.t. the global batch size when using a `tf.distribute.Strategy`.
  * Set the default loss reduction to `AUTO` to improve the reliability of loss scaling with distribution strategy and custom training loops. `AUTO` indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used in a distribution strategy scope, outside of built-in training loops such as `tf.keras` `compile` and `fit`, we expect the reduction value to be `NONE` or `SUM`. Using other values will raise an error.
  * Support for multi-host `ncclAllReduce` in Distribution Strategy.
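A minimal single-machine sketch of the `tf.distribute.Strategy` + Keras integration (it falls back to one replica on a CPU-only machine):

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all visible GPUs;
# on a machine with no GPUs it runs with a single (CPU) replica.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(2,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# model.fit(...) would now distribute each training step across the replicas.
```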
* `tf.estimator`:
  * Replace `tf.contrib.estimator.add_metrics` with `tf.estimator.add_metrics`.
  * Use `tf.compat.v1.estimator.inputs` instead of `tf.estimator.inputs`.
  * Replace contrib references with `tf.estimator.experimental.*` for APIs in early_stopping.py in Estimator.
  * Canned Estimators will now use Keras optimizers by default. An error will be raised if tf.train.Optimizers are used; you will have to switch to tf.keras.optimizers or tf.compat.v1 canned Estimators.
  * A checkpoint converter for canned Estimators has been provided to transition canned Estimators that are warm-started from `tf.train.Optimizers` to `tf.keras.optimizers`.
  * Losses are scaled in canned estimator v2 and not in the optimizers anymore. If you are using Estimator + distribution strategy + optimizer v1, the behavior does not change. This implies that if you are using a custom estimator with optimizer v2, you have to scale losses. We have new utilities to help scale losses: `tf.nn.compute_average_loss` and `tf.nn.scale_regularization_loss`.
* `tf.keras`:
  * Premade models (including Linear and WideDeep) have been introduced for the purpose of replacing Premade estimators.
  * Model saving changes: `model.save` and `tf.saved_model.save` may now save to the TensorFlow SavedModel format. The model can be restored using `tf.keras.models.load_model`. HDF5 files are still supported, and may be used by specifying `save_format="h5"` when saving.
  * Raw TensorFlow functions can now be used in conjunction with the Keras Functional API during model creation. This obviates the need for users to create Lambda layers in most cases when using the Functional API. Like Lambda layers, TensorFlow functions that result in Variable creation or assign ops are not supported.
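The SavedModel round-trip behind these saving changes can be sketched with a plain `tf.Module` (the `Doubler` class and path here are illustrative):

```python
import os
import tempfile

import tensorflow as tf

class Doubler(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return x * 2.0

path = os.path.join(tempfile.mkdtemp(), "doubler")
tf.saved_model.save(Doubler(), path)   # writes a SavedModel directory

restored = tf.saved_model.load(path)   # restore and call without the class
out = restored(tf.constant([1.0, 3.0]))
```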
  * Add support for passing a list of lists to the `metrics` argument in Keras `compile`.
  * Add `tf.keras.layers.AbstractRNNCell` as the preferred implementation for RNN cells in TF v2. Users can use it to implement RNN cells with custom behavior.
  * Keras training and validation curves are shown on the same plot when using the TensorBoard callback.
  * Switched Keras `fit`/`evaluate`/`predict` execution to use only a single unified path by default, regardless of input type, unless eager execution has been explicitly disabled. This unified path places an eager-friendly training step inside of a `tf.function`. With this:
    * All input types are converted to `Dataset`.
    * The path assumes there is always a distribution strategy. When a distribution strategy is not specified, the path uses a no-op distribution strategy.
    * The training step is wrapped in `tf.function` unless `run_eagerly=True` is set in compile.
    The single-path execution code does not yet support all use cases. We fall back to the existing v1 execution paths if your model contains any of the following: `sample_weight_mode` in compile, `weighted_metrics` in compile, a v1 optimizer, or target tensors in compile.
    If you are experiencing any issues because of this change, please inform us (file an issue) about your use case; in the meantime, you can unblock yourself by setting `experimental_run_tf_function=False` in compile. We have seen a couple of use cases where the model usage pattern is not as expected and would not work with this change:
    * output tensors of one layer are used in the constructor of another.
    * symbolic tensors outside the scope of the model are used in custom loss functions.
    The flag can be disabled for these cases, but ideally the usage pattern will need to be fixed.
  * Mark Keras `set_session` as `compat.v1` only.
  * `tf.keras.estimator.model_to_estimator` now supports exporting to the `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed; a bug in the resizing implementation was fixed.
  * Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers, using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
  * Raise an error if the `batch_size` argument is used when the input is a dataset/generator/Keras sequence.
  * Update TF 2.0 `keras.backend.name_scope` to use the TF 2.0 `name_scope`.
  * Add v2 module aliases for losses, metrics, initializers and optimizers: `tf.losses = tf.keras.losses` & `tf.metrics = tf.keras.metrics` & `tf.initializers = tf.keras.initializers` & `tf.optimizers = tf.keras.optimizers`.
  * Updated binary cross-entropy logic in Keras when the input is probabilities. Instead of converting probabilities to logits, we use the cross-entropy formula for probabilities.
  * Added public APIs for the `cumsum` and `cumprod` Keras backend functions.
  * Add support for temporal sample weight mode in subclassed models.
  * Raise a ValueError if an integer is passed to the training APIs.
  * Added fault-tolerance support for training a Keras model via `model.fit()` with `MultiWorkerMirroredStrategy`; a tutorial is available.
  * A custom Callback tutorial is now available.
  * To train with `tf.distribute`, the Keras API is recommended over estimator.
  * The `steps_per_epoch` and `steps` arguments are supported with numpy arrays.
  * New error message when unexpected keys are used in sample_weight/class_weight dictionaries.
  * Losses are scaled in Keras compile/fit and not in the optimizers anymore. If you are using a custom training loop, we have new utilities to help scale losses: `tf.nn.compute_average_loss` and `tf.nn.scale_regularization_loss`.
  * The Layer `apply` and `add_variable` APIs are deprecated.
  * Added support for channels-first data format in cross-entropy losses with logits, and support for tensors with unknown ranks.
  * Error messages will be raised if `add_update`, `add_metric`, `add_loss`, or activity regularizers are used inside of a control flow branch.
  * New loss reduction types:
    1. `AUTO`: Indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of built-in training loops such as `tf.keras` `compile` and `fit`, we expect the reduction value to be `SUM` or `NONE`. Using `AUTO` in that case will raise an error.
    2. `NONE`: Weighted losses with one dimension reduced (axis=-1, or the axis specified by the loss function). When this reduction type is used with built-in Keras training loops like `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer, but the reported loss will be a scalar value.
    3. `SUM`: Scalar sum of weighted losses.
    4. `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by the number of elements in the losses. This reduction type is not supported when used with `tf.distribute.Strategy` outside of built-in training loops like `tf.keras` `compile`/`fit`.
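The arithmetic behind `SUM` and `SUM_OVER_BATCH_SIZE` is simple; a plain-Python sketch with made-up loss values:

```python
# Hypothetical weighted per-example losses for a batch of 4.
losses = [0.5, 1.5, 2.0, 4.0]

reduced_sum = sum(losses)                 # SUM -> 8.0
reduced_mean = sum(losses) / len(losses)  # SUM_OVER_BATCH_SIZE -> 2.0
```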
  * Losses passed to the `compile` API (strings and v1 losses) which are not instances of the v2 `Loss` class are wrapped in a `LossWrapper` class, so all losses will now use `SUM_OVER_BATCH_SIZE` reduction as default.
  * `model.add_loss(symbolic_tensor)` should work in ambient eager.
  * Update metric names to always reflect what the user has given in compile. This affects the following cases:
    * When the name is given as 'accuracy'/'crossentropy'.
    * When an aliased function name is used, e.g. 'mse'.
    * Removing the `weighted` prefix from weighted metric names.
  * Allow non-Tensors through v2 losses.
  * Add a v2 sparse categorical crossentropy metric.
  * Add v2 APIs for the `AUCCurve` and `AUCSummationMethod` enums.
  * `add_update` can now be passed a zero-arg callable in order to support turning off the update when setting `trainable=False` on a Layer of a Model compiled with `run_eagerly=True`.
  * Standardize the LayerNormalization API by replacing the args `norm_axis` and `params_axis` with `axis`.
  * Fixed critical bugs that help with DenseFeatures usability in TF2.

* `tf.lite`:
  * Added an evaluation script for COCO minival.
  * Add delegate support for `QUANTIZE`.
  * Add `GATHER` support to the NN API delegate.
  * Added support for the TFLiteConverter Python API in 2.0. It contains the functions from_saved_model, from_keras_file, and from_concrete_functions.
  * Add `EXPAND_DIMS` support to the NN API delegate TEST.
  * Add the `narrow_range` attribute to QuantizeAndDequantizeV2 and V3.
  * Added support for the `tflite_convert` command-line tool in 2.0.
  * The post-training quantization tool supports quantizing weights shared by multiple operations. The models made with versions of this tool will use INT8 types for weights and will only be executable by interpreters from this version onwards.
  * The post-training quantization tool supports fp16 weights and GPU delegate acceleration for fp16.
  * Add delegate support for `QUANTIZED_16BIT_LSTM`.
  * Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc.
Other:
* Fix accidental quadratic graph construction cost in graph-mode `tf.gradients()`.
* ResourceVariable's gather op supports batch dimensions.
* ResourceVariable support for `gather_nd`.
* `ResourceVariable` and `Variable` no longer accept `constraint` in the constructor, nor expose it as a property.
* Added a gradient for the `SparseToDense` op.
* Expose a flag that allows the number of threads to vary across Python benchmarks.
* `image.resize` in 2.0 now supports gradients for the new resize kernels.
* `image.resize` now considers proper pixel centers and has new kernels (incl. anti-aliasing).
* Renamed `tf.image` functions to remove duplicate "image" where it is redundant.
* Variadic reduce is supported on CPU.
* Remove unused `StringViewVariantWrapper`.
* Delete unused `Fingerprint64Map` op registration.
* Add broadcasting support to `tf.matmul`.
* Add a C++ gradient for `BatchMatMulV2`.
* Add the `tf.math.cumulative_logsumexp` operation.
* Add ellipsis (...) support for `tf.einsum()`.
* Add an expand_composites argument to all `nest.*` methods.
* Added `strings.byte_split`.
* Add a new "result_type" parameter to `tf.strings.split`.
* Add a name argument to `tf.string_split` and `tf.strings_split`.
* Extend `tf.strings.split` to support inputs with any rank.
* Added `tf.random.binomial`.
* Added `key` and `skip` methods to `random.experimental.Generator`.
* Extend `tf.function` with basic support for CompositeTensor arguments (such as `SparseTensor` and `RaggedTensor`).
* `parallel_for.pfor`: add converters for Softmax, LogSoftmax, IsNaN, All, Any, and MatrixSetDiag.
* `parallel_for`: add converters for LowerTriangularSolve and Cholesky.
* `parallel_for`: add converters for `LogMatrixDeterminant` and `MatrixBandPart`.
* `parallel_for`: add a converter for `MatrixDiag`.
* `parallel_for`: add converters for `OneHot`, `LowerBound`, and `UpperBound`.
* `parallel_for`: add a converter for `BroadcastTo`.
* Add a `pfor` converter for `Squeeze`.
* Add `RaggedTensor.placeholder()`.
* Add ragged tensor support to `tf.squeeze`.
* Update RaggedTensors to support int32 row_splits.
* Allow `LinearOperator.solve` to take a `LinearOperator`.
* Allow all dtypes for `LinearOperatorCirculant`.
* Introduce a MaxParallelism method.
* Add `LinearOperatorHouseholder`.
* Adds Philox support to the new stateful RNG's XLA path.
* Add `TensorSpec` support for CompositeTensors.
* Add the `tf.linalg.tridiagonal_solve` op.
* Added a partial_pivoting input parameter to `tf.linalg.tridiagonal_solve`.
* Added a gradient to `tf.linalg.tridiagonal_solve`.
* Add the `tf.linalg.tridiagonal_mul` op.
* Added a GPU implementation of `tf.linalg.tridiagonal_matmul`.
* Add `LinearOperatorToeplitz`.
* Upgraded LIBXSMM to version 1.11.
* Uniform processing of quantized embeddings by the Gather and EmbeddingLookup ops.
* Corrected a misstatement in the documentation of the sparse softmax cross entropy logit parameter.
* Add `tf.ragged.boolean_mask`.
* `tf.switch_case` added, which selects a branch_fn based on a branch_index.
* The C++ kernel of the gather op supports batch dimensions.
* Fixed the default value and documentation for the `trainable` arg of tf.Variable.
* EagerTensor now supports the numpy buffer interface for tensors.
* This change bumps the version number of the FullyConnected Op to 5.
* Added a new op: `tf.strings.unsorted_segment_join`.
* Add HW acceleration support for `topK_v2`.
* CloudBigtable version updated to v0.10.0.
* Expose `Head` as a public API.
* Added the `tf.sparse.from_dense` utility function.
* Improved ragged tensor support in `TensorFlowTestCase`.
* Added a function `nested_value_rowids` for ragged tensors.
* Add `tf.ragged.stack`.
* Makes the A-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not.
* `ResizeInputTensor` now works for all delegates.
* `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.
* Add support for local soft device placement for eager ops.
* Pass partial_pivoting to the `_TridiagonalSolveGrad`.
* Add HW acceleration support for `LogSoftMax`.
* Add a guard to avoid acceleration of L2 Normalization with input rank != 4.
* Fix a memory allocation problem when calling `AddNewInputConstantTensor`.
* Delegate application failure leaves the interpreter in a valid state.
* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.
* tf.cond, tf.while, and if and while in AutoGraph now accept a non-scalar predicate if it has a single element. This does not affect non-V2 control flow.
* Fix a potential security vulnerability where decoding variant tensors from proto could result in heap out-of-bounds memory access.
* Only create a GCS directory object if the object does not already exist.
* Introduce a `dynamic` constructor argument in Layer and Model, which should be set to True when using imperative control flow in the `call` method.
* Begin adding a Go wrapper for the C Eager API.
* XLA HLO graphs can now be inspected with the interactive_graphviz tool.
* Add dataset ops to the graph (or create kernels in Eager execution) during Python Dataset object creation instead of during Iterator creation time.
* Add a `batch_dims` argument to `tf.gather`.
* The behavior of `tf.gather` is now correct when axis=None and batch_dims<0.
* Update the docstring for gather to properly describe the non-empty `batch_dims` case.
* Removed dtype in the constructor of initializers and partition_info in call.
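The new `batch_dims` argument to `tf.gather` gathers per batch: with `batch_dims=1`, row `i` of the indices selects from row `i` of the params:

```python
import tensorflow as tf

params = tf.constant([[10, 11, 12],
                      [20, 21, 22]])
indices = tf.constant([[2, 0],
                       [1, 1]])

# out[i, j] = params[i, indices[i, j]]
out = tf.gather(params, indices, batch_dims=1)
```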
* Add the `tf.math.nextafter` op.
* Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically dispatches the best kernel implementation based on the CPU vector architecture. To disable them, build with `--define=tensorflow_mkldnn_contraction_kernel=0`.
* `tf.linspace(start, stop, num)` now always uses "stop" as the last value (for num > 1).
* Added top-k support to precision and recall in Keras metrics.
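The clarified `tf.linspace` endpoint behavior:

```python
import tensorflow as tf

# For num > 1 the final element is always exactly `stop`.
v = tf.linspace(0.0, 10.0, 5)
```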
* Add a ragged size op and register it to the op dispatcher.
* Transitive dependencies on `:pooling_ops` were removed. Some users may need to add explicit dependencies on `:pooling_ops` if they reference the operators from that library.
* Add a `CompositeTensor` base class.
* Malformed gif images could result in an out-of-bounds access in the color palette of the frame. This has now been fixed.
* Add templates and interfaces for creating lookup tables.
* `Tensor::UnsafeCopyFromInternal` deprecated in favor of `Tensor::BitcastFrom`.
* In the `map_vectorization` optimization, reduce the degree of parallelism in the vectorized map node.
* Add a variant wrapper for `absl::string_view`.
* Add OpKernels for some stateless maps.
* DType is no longer convertible to an int. Use `dtype.as_datatype_enum` instead of `int(dtype)` to get the same result.
* Support both binary and -1/1 label input in v2 hinge and squared hinge losses.
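Since DType no longer converts to int, code that needs the numeric value should use the replacement shown above; `tf.float32` maps to the `DT_FLOAT` proto enum (value 1):

```python
import tensorflow as tf

# int(tf.float32) now raises; use as_datatype_enum for the proto enum value.
enum_value = tf.float32.as_datatype_enum
```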
* Added `LinearOperator.adjoint` and `LinearOperator.H` (alias).
* Expose CriticalSection in core as `tf.CriticalSection`.
* Enhanced graphviz output.
* Add op kernel templates for common table operations.
* Fix callbacks not logging values in eager mode when a deferred-build model is used.
* `SignatureDef` util functions have been deprecated.
* Update `Fingerprint64Map` to use aliases.
* Add legacy string flat hash map op kernels.
* Add support for `add_metric` in the graph function mode.
* Updated cosine similarity loss: removed the negation sign from cosine similarity.
* Changed the default for gradient accumulation for TPU embeddings to true.
* Adds a summary trace API for collecting graph and profile information.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
1e100, a6802739, Abolfazl Shahbazi, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy, Amit, Amit Srivastava, Andy Craze, Anshuman Tripathy, Armen Poghosov, armenpoghosov, Arpit Shah, Ashwin Ramaswami, Aurelien Geron, AuréLien Geron, aweers, awesomealex1, Bairen Yi, Ben Barsdell, Bhavani Subramanian, Brandon Carter, candy.dc, Chao Liu, Clayne Robison, csukuangfj, Dan Jarvis, Dan Lazewatsky, Daniel Ingram, Dave Airlie, David Norman, Dayananda V, Denis Khalikov, Deven Desai, Dheeraj Rajaram Reddy, dmitrievanthony, Drew Szurko, Duncan Riach, Fei Hu, Felix Lemke, Filip Matzner, fo40225, frreiss, Gautam, gehring, Grzegorz George Pawelczak, Grzegorz Pawelczak, HanGuo97, Hari Shankar, hehongliang, Heungsub Lee, Hoeseong Kim, IHong Jhuo, Ilango R, Innovimax, Jacky Ko, Jakub Lipinski, jcf94, Jeff Poznanovic, Jia Qingtong, Jiankang, Joe Q, Joe Quadrino, Jonas Rauber, Jonathan Kyl, Joppe Geluykens, Joseph Friedman, jtressle, jwu, K. Hodges, kaixih, Karl Lessard, Karl Weinmeister, Kashif Rasul, kjopek, KoanSin Tan, kouml, ktaebum, Laurent Le Brun, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Geiger, Luke Han, Mahmoud Abuzaina, manhyuk, Marco Gaido, Marek Drozdowski, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley, MattConley, mbhuiyan, mdfaijul, Melissa Grueter, Michael KäUfl, MickaëL Schoentgen, Miguel Morin, Mike Arpaia, nammbash, Natalia Gimelshein, NayanaIbm, neargye, Nehal J Wani, Niels Ole Salscheider, Niranjan Hasabnis, Nutti, olicht, P Sudeepam, Paige Bailey, Palmer Lao, Pariksheet Pinjari, Pavel Samolysov, Pooya Davoodi, Ryan Jiang, Samantha Andow, Sami Kama, Saurabh Deoras, Shahzad Lone, Shashi, Siju, Siju Samuel, SneaseAbq, Spencer Schaber, srinivasan.narayanamoorthy, Steve Lang, Steve Nesae, Supriya Rao, Taylor Jakobson, Taylor Thornton, ThisIsPIRI, Thomas Deegan, tomguluson92, Tongxuan Liu, Vagif, vcarpani, Vikram Tiwari, Vishwak Srinivasan, VitorAlves, wangsiyu, WeberXie, WeijieSun, WenHeng (Jack) Chung, wenxizhu, William D. 
Irons, Yan Facai (颜发才), ymodak, Yong Tang, Younes Khoudli, Yuan Lin, YvesNoel Weweler, zjjott, 卜居, 王振华 (Wang Zhenhua),
4d55397500, a6802739, Abdullah Selek, abenmao, Adam Richter, Ag Ramesh, Albin Joy, Alex, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Aman Patel, Amit, Amit Kumar Jaiswal, Amit Srivastava, amoitra, Andreas Eberle, Andrew Lihonosov, Anthony Hsu, Anthony Platanios, Anuj Rawat, arp95, Arpit Shah, Astropeak, Augustina Ragwitz, Aurelien Geron, AuréLien Geron, avasid, aweers, Ayush Agrawal, Bas Aarts, Bastian Eichenberger, Bayberry Z, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bin Fan, blairhan, BléNesi Attila, BodinE, Bryan Cutler, Cao Zongyan, Casper Da CostaLuis, Chen Guoyin, chenchc, chengchingwen, chie8842, Christian Hansen, Christoph Boeddeker, Christopher Yeh, Clayne Robison, Coady, Patrick, crafet, ctiijima, Daniel Rasmussen, Daniel Salvadori, David Norman, delock, Denis Khalikov, Deven Desai, Diego Caballero, Donovan Ong, Duncan Dean, Duncan Riach, Dustin Neighly, Dwight J Lyle, Eamon ItoFisher, eashtian3, Edward Forgacs, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Evgeniy Polyakov, Fangjun Kuang, Federico Martinez, Fei Hu, Filip Matzner, FlashTek, fo40225, formath, FrançOis Chollet, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. 
Hussain Chinoy, Gabriel, gehring, Geoffrey Irving, George Grzegorz Pawelczak, George Sterpu, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, GyoungYoon Ryoo, haison, Hanton Yang, Haraldur TóMas HallgríMsson, Huan Li (李卓桓), HåKon Sandsmark, IHong, Ilham Firdausi Putra, Imran Salam, Irene Dea, Ivan Habernal, Jacky, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, jer, Jeroen BéDorf, jerryyin, jhalakp, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Jonathan, Joon, Josh Beal, Julian Niedermeier, Junqin Zhang, Justin Dujardin, Justin Tunis, Kaixi Hou, Karthik Muthuraman, Kay Zhu, KbhuteIbm, KDR, Keno Fischer, Kevin Mader, khanhlvg, Kilaru Yasaswi Sri Chandra Gandhi, Koock Yoon, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, LeslieFang, Letian Kang, Li, Guizi, Lukas Folle, Lukas Geiger, luxupu, lvli, Ma, Guokai, Mahmoud Abuzaina, Maksym Kysylov, Mandar Deshpande, Manraj Singh Grover, Margaret MaynardReid, Mark Ryan, Matt Conley, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. Tarnowski, Mihail Salnikov, Mikalai Drabovich, Mike Holcomb, minds, monklof, Moses Marin, mpppk, Mr. Metal, MshrH, musikisomorphie, nammbash, Nathan Luehr, Nayana Thorat, Neeraj Pradhan, Neil, Nick, Nick Lewycky, Niels Ole Salscheider, Niklas SilfverströM, Niranjan Hasabnis, Nuka137, ocjosen, omeir1, P Sudeepam, Pan Daoxin, Pariksheet Pinjari, Pasquale Minervini, Patrick J. 
Lopresti, Patrik Gustavsson, Pavel Akhtyamov, PENGWA, per1234, PeterLee, Phan Van Nguyen Duc, Philipp Jund, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, R S Nikhil Krishna, Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, robert, Rohit Gupta, Roland Zimmermann, Roman Soldatow, RonLek, Ruizhe, Ryan Jiang, saishruthi, Saleem Abdulrasool, Sami Kama, SanaDamani, sdamani, Sean Morgan, seanshpark, Sebastien Iooss, ServInc, Severen Redwood, Shashank Gupta, shashvat, Shashvat Chand Shahi, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, sremedios, Srini511, srinivasan.narayanamoorthy, Subin, Sumesh Udayakumaran, Sungmann Cho, sunway513, sxwang, TaeHwan Jung, Taehoon Lee, Takeo Sawada, Taylor Jakobson, Ted Chang, TengLu, terryky, ThisIsIsaac, Thomas Deegan, Thomas Hagebols, tianyapiaozi, Till Hoffmann, Tim Zaman, Tongxuan Liu, Trent Lo, Trevor Morris, TungJerry, Tyorden, Uday Bondhugula, v1incent, Vasileios Lioutas, vbvg2008, Vijay Ravichandran, Viktor Gal, Vincent, Vishnuvardhan Janapati, Vivek Suryamurthy, wangsiyu, wateryzephyr, Wei Wang, WenHeng (Jack) Chung, wenxizhu, Will Battel, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xiaoquan Kong, Xin, Xinping Wang, YannYy, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Yuan (Terry) Tang, Yuchen Ying, zhangyujing, zyeric, 王振华 (Zhenhua Wang), 黄鑫
Release 1.15.0rc0
This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.
Major Features and Improvements
* As announced, the `tensorflow` pip package will by default include GPU support (same as `tensorflow-gpu` now) for the platforms on which we currently have GPU support (Linux and Windows). It will work on machines with and without Nvidia GPUs. `tensorflow-gpu` will still be available, and CPU-only packages can be downloaded as `tensorflow-cpu` for users who are concerned about package size.
* TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2` module. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1` module. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward-compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modification against an installation of either 1.15 or 2.0.
* `EagerTensor` now supports the numpy buffer interface for tensors.
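The forward-compatibility pattern above can be sketched as follows. This is a minimal illustration, assuming either TF 1.15 or a 2.x release is installed; the same file runs unchanged on both by pinning the 1.x API surface:

```python
# Runs on both TF 1.15 and TF 2.x by explicitly importing the v1 API.
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()  # opt out of 2.0 semantics even under TF 2.x

x = tf.placeholder(tf.float32, shape=(None,))
y = x * 2.0

# Classic graph-and-session execution, regardless of the installed version.
with tf.Session() as sess:
    result = sess.run(y, feed_dict={x: [1.0, 2.0]})
print(result)  # [2. 4.]
```

Swapping the import for `tensorflow.compat.v2` (and dropping `disable_v2_behavior`) gives the same guarantee in the other direction.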
* Added toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
* Enabled v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with the `tf.data`, `tf.distribute` and `tf.keras` APIs.
* Added `enable_tensor_equality()`, which switches the behavior such that:
  * Tensors are no longer hashable.
  * Tensors can be compared with `==` and `!=`, yielding a Boolean Tensor with elementwise comparison results. This will be the default behavior in 2.0.
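The AutoGraph feature above can be illustrated with a small sketch: plain Python `while`/`if` control flow over tensors, traced into a graph by `tf.function`. This assumes 2.0-style eager execution (enabled by default on TF 2.x, or via `tf.enable_v2_behavior()` on 1.15):

```python
import tensorflow as tf

if hasattr(tf, "enable_v2_behavior"):
    tf.enable_v2_behavior()  # no-op path for TF 2.x, where this is the default

@tf.function
def collatz_steps(n):
    """Counts Collatz steps using ordinary Python control flow.

    AutoGraph rewrites the `while` and `if` statements below into
    graph-level control-flow ops because `n` is a Tensor.
    """
    steps = tf.constant(0)
    while n > 1:
        if n % 2 == 0:
            n = n // 2
        else:
            n = 3 * n + 1
        steps += 1
    return steps

print(int(collatz_steps(tf.constant(6))))  # 8
```

The same function body would raise an error if built with raw graph ops, since Python `while` cannot normally iterate on a symbolic tensor.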
Breaking Changes
* TensorFlow code now produces 2 different pip packages: `tensorflow_core`, containing all the code (in the future it will contain only the private implementation), and `tensorflow`, a virtual pip package doing forwarding to `tensorflow_core` (and in the future it will contain only the public API of TensorFlow). We don't expect this to be breaking, unless you were importing directly from the implementation.
* TensorFlow 1.15 is built using devtoolset7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.
* Deprecated the use of `constraint=` and `.constraint` with `ResourceVariable`.
* `tf.keras`:
  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
  * `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel.
  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed; a bug in the resizing implementation was fixed.
  * Layers now default to `float32`, and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow 2, and a warning will be issued that starts with `Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32`. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.
* Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when the Python line executes) if the input tensors' values are known at that time, not during `session.run()`. When this happens, a no-op is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
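The `float32`-default change above can be worked around exactly as the note suggests; a minimal sketch (assumes Keras is available in the installed TF):

```python
import tensorflow as tf

# Opt the whole program back into float64 instead of the new float32 default.
tf.keras.backend.set_floatx('float64')

layer = tf.keras.layers.Dense(4)          # kernel is created as float64
out = layer(tf.ones((2, 3), dtype=tf.float64))
print(out.dtype)                          # float64, no silent down-cast

tf.keras.backend.set_floatx('float32')    # restore the default afterwards
```

Alternatively, pass `dtype='float64'` to individual layer constructors to keep the rest of the model in `float32`.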
Bug Fixes and Other Changes
* `tf.data`:
  * Promoted `unbatch` from experimental to core API.
  * Added support for datasets as inputs to `from_tensors` and `from_tensor_slices`, and batching and unbatching of nested datasets.
* `tf.keras`:
  * `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  * Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.
  * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
  * Added an `implementation=3` mode for the `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers, using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
  * Enabled the Keras compile API `experimental_run_tf_function` flag by default. This flag enables a single training/eval/predict execution path. With this: 1. All input types are converted to `Dataset`. 2. When a distribution strategy is not specified, this goes through the no-op distribution strategy path. 3. Execution is wrapped in `tf.function` unless `run_eagerly=True` is set in compile.
  * Raise an error if the `batch_size` argument is used when the input is a dataset/generator/Keras sequence.
* `tf.lite`:
  * Added `GATHER` support to the NN API delegate.
  * The tflite object detection script has a debug mode.
  * Added delegate support for `QUANTIZE`.
  * Added an evaluation script for COCO minival.
  * Added delegate support for `QUANTIZED_16BIT_LSTM`.
  * Converts hardswish subgraphs into atomic ops.
* Added support for defaulting the value of the `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.
* `parallel_for`: added a converter for `MatrixDiag`.
* Added a `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.
* Added new op: `tf.strings.unsorted_segment_join`.
* Added HW acceleration support for `topK_v2`.
* Added new `TypeSpec` classes.
* CloudBigtable version updated to v0.10.0.
* Exposed `Head` as public API.
* Updated the docstring for gather to properly describe the non-empty `batch_dims` case.
* Added `tf.sparse.from_dense` utility function.
* Improved ragged tensor support in `TensorFlowTestCase`.
* Made the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not.
* `ResizeInputTensor` now works for all delegates.
* Added `EXPAND_DIMS` support to the NN API delegate (TEST: expand_dims_test).
* `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.
* `tf.cond`, `tf.while`, and `if` and `while` in AutoGraph now accept a non-scalar predicate if it has a single element. This does not affect non-V2 control flow.
* `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.
* Refactored code in Quant8 LSTM support to reduce the TFLite binary size.
* Added support for local soft device placement for eager ops.
* Added HW acceleration support for `LogSoftMax`.
* Added a function `nested_value_rowids` for ragged tensors.
* Added a guard to avoid acceleration of L2 Normalization with input rank != 4.
* Added the `tf.math.cumulative_logsumexp` operation.
* Added `tf.ragged.stack`.
* Fixed a memory allocation problem when calling `AddNewInputConstantTensor`.
* Delegate application failure leaves the interpreter in a valid state.
* Added a check for correct memory alignment to `MemoryAllocation::MemoryAllocation()`.
* Extracted `NNAPIDelegateKernel` from nnapi_delegate.cc.
* Added support for `FusedBatchNormV3` in converter.
* Added a ragged-to-dense op for directly calculating tensors.
* Fixed an accidental quadratic graph construction cost in graph-mode `tf.gradients()`.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
a6802739, Aaron Ma, Abdullah Selek, Abolfazl Shahbazi, Ag Ramesh, Albert Z. Guo, Albin Joy, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Amit Srivastava, amoitra, Andrew Lihonosov, Andrii Prymostka, Anuj Rawat, Astropeak, Ayush Agrawal, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bryan Cutler, candy.dc, Cao Zongyan, CaptainPool, Casper Da CostaLuis, Chen Guoyin, Cheng Chang, chengchingwen, Chong Yan, Choong Yin Thong, Christopher Yeh, Clayne Robison, Coady, Patrick, Dan Ganea, David Norman, Denis Khalikov, Deven Desai, Diego Caballero, Duncan Dean, Duncan Riach, Dwight J Lyle, Eamon ItoFisher, eashtian3, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Fangjun Kuang, Fei Hu, fo40225, formath, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. Hussain Chinoy, Gabriel, gehring, George Grzegorz Pawelczak, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, haison, Haraldur TóMas HallgríMsson, HarikrishnanBalagopal, HåKon Sandsmark, IHong, Ilham Firdausi Putra, Imran Salam, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, Jeroen BéDorf, Jerry Shih, jerryyin, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Joon, Josh Beal, Julian Niedermeier, Jun Wan, Junqin Zhang, Junyuan Xie, Justin Tunis, Kaixi Hou, Karl Lessard, Karthik Muthuraman, KbhuteIbm, khanhlvg, Koock Yoon, kstuedem, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, LeslieFang, LeslieFangIntel, Li, Guizi, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manraj Singh Grover, Margaret MaynardReid, Mark Ryan, Matt Conley, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. 
Tarnowski, minds, mpppk, musikisomorphie, Nagy Mostafa, Nayana Thorat, Neil, Niels Ole Salscheider, Niklas SilfverströM, Niranjan Hasabnis, ocjosen, olramde, Pariksheet Pinjari, Patrick J. Lopresti, Patrik Gustavsson, per1234, PeterLee, Phan Van Nguyen Duc, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, richardbrks, robert, RonLek, Ryan Jiang, saishruthi, Saket Khandelwal, Saleem Abdulrasool, Sami Kama, SanaDamani, Sergii Khomenko, Severen Redwood, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, Srini511, srinivasan.narayanamoorthy, Sumesh Udayakumaran, Sungmann Cho, TaeHwan Jung, Taehoon Lee, Takeshi Watanabe, TengLu, terryky, TheMindVirus, ThisIsIsaac, Till Hoffmann, Timothy Liu, Tomer Gafner, Tongxuan Liu, Trent Lo, Trevor Morris, Uday Bondhugula, Vasileios Lioutas, vbvg2008, Vishnuvardhan Janapati, Vivek Suryamurthy, Wei Wang, WenHeng (Jack) Chung, wenxizhu, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xinan Jiang, Xinping Wang, YannYy, Yasir Modak, Yong Tang, Yongfeng Gu, Yuchen Ying, Yuxin Wu, zyeric, 王振华 (Zhenhua Wang)
Release 2.0.0rc0
Major Features and Improvements
TensorFlow 2.0 focuses on simplicity and ease of use, featuring updates like:
 Easy model building with Keras and eager execution.
 Robust model deployment in production on any platform.
 Powerful experimentation for research.
 API simplification by reducing duplication and removing deprecated endpoints.
For details on best practices with 2.0, see the Effective 2.0 guide
For information on upgrading your existing TensorFlow 1.x models, please refer to our Upgrade and Migration guides. We have also released a collection of tutorials and getting started guides.
Highlights
* TF 2.0 delivers Keras as the central high-level API used to build and train models. Keras provides several model-building APIs such as Sequential, Functional, and Subclassing, along with eager execution for immediate iteration and intuitive debugging, and `tf.data` for building scalable input pipelines. Check out the guide for additional details.
* Distribution Strategy: TF 2.0 users will be able to use the `tf.distribute.Strategy` API to distribute training with minimal code changes, yielding great out-of-the-box performance. It supports distributed training with Keras `model.fit`, as well as with custom training loops. Multi-GPU support is available, along with experimental support for multi-worker and Cloud TPUs. Check out the guide for more details.
* Functions, not Sessions. The traditional declarative programming model of building a graph and executing it via a `tf.Session` is discouraged, and replaced by writing regular Python functions. Using the `tf.function` decorator, such functions can be turned into graphs which can be executed remotely, serialized, and optimized for performance.
* Unification of `tf.train.Optimizer`s and `tf.keras.Optimizer`s. Use `tf.keras.Optimizer`s for TF 2.0. `compute_gradients` is removed from the public API; use `GradientTape` to compute gradients.
* AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with the `tf.data`, `tf.distribute` and `tf.keras` APIs.
* Unification of exchange formats to SavedModel. All TensorFlow ecosystem projects (TensorFlow Lite, TensorFlow JS, TensorFlow Serving, TensorFlow Hub) accept SavedModels. Model state should be saved to and restored from SavedModels.
* API Changes: Many API symbols have been renamed or removed, and argument names have changed. Many of these changes are motivated by consistency and clarity. The 1.x API remains available in the `compat.v1` module. A list of all symbol changes can be found here.
* API cleanup, including removing `tf.app`, `tf.flags`, and `tf.logging` in favor of absl-py.
* No more global variables with helper methods like `tf.global_variables_initializer` and `tf.get_global_step`.
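The `GradientTape` workflow that replaces `compute_gradients` can be sketched as follows (a minimal single-variable example, assuming a TF 2.x install):

```python
import tensorflow as tf

w = tf.Variable(3.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

# Record the forward pass on a tape, then ask the tape for gradients,
# instead of calling optimizer.compute_gradients() as in 1.x.
with tf.GradientTape() as tape:
    loss = w * w              # d(loss)/dw = 2w = 6.0 at w = 3.0

grads = tape.gradient(loss, [w])
opt.apply_gradients(zip(grads, [w]))  # w <- 3.0 - 0.1 * 6.0 = 2.4
print(w.numpy())
```

The same tape pattern scales to full models by passing `model.trainable_variables` to `tape.gradient`.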
Breaking Changes
* Many backwards-incompatible API changes have been made to clean up the APIs and make them more consistent.
* `tf.contrib` has been deprecated, and its functionality has been either migrated to the core TensorFlow API, to an ecosystem project such as tensorflow/addons or tensorflow/io, or removed entirely.
* Premade estimators in the `tf.estimator.DNN/Linear/DNNLinearCombined` family have been updated to use `tf.keras.optimizers` instead of the `tf.compat.v1.train.Optimizer`s. If you do not pass in an `optimizer=` arg, or if you use a string, the premade estimator will use the Keras optimizer. This is checkpoint-breaking, as the optimizers have separate variables. A checkpoint converter tool for converting optimizers is included with the release, but if you want to avoid any change, switch to the v1 version of the estimator: `tf.compat.v1.estimator.DNN/Linear/DNNLinearCombined*`.
* The equality operation on Tensors and Variables now compares on value instead of `id()`. As a result, both Tensors and Variables are no longer hashable types.
* Layers now default to `float32`, and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow 2, and a warning will be issued that starts with "Layer is casting an input tensor from dtype float64 to the layer's dtype of float32". To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.
Refer to our public project status tracker and issues tagged with 2.0 on GitHub for insight into recent issues and development progress.
If you experience any snags when using TF 2.0, please let us know at the TF 2.0 Testing User Group. We have a support mailing list as well as weekly testing meetings, and would love to hear your migration feedback and questions.
Bug Fixes and Other Changes

* `tf.data`:
  * Added support for TensorArrays to `tf.data.Dataset`.
  * Integrated Ragged Tensors with `tf.data`.
  * All core and experimental `tf.data` transformations that take user-defined functions can now span multiple devices.
  * Extended the TF 2.0 support for `shuffle(..., reshuffle_each_iteration=True)` and `cache()` to work across different Python iterators for the same dataset.
  * Removed the `experimental_numa_aware` option from `tf.data.Options`.
  * Added `num_parallel_reads` and support for passing in a Dataset containing filenames into `TextLineDataset` and `FixedLengthRecordDataset`.
  * Added support for defaulting the value of the `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.
  * Promoted `tf.data.experimental.enumerate_dataset` to core as `tf.data.Dataset.enumerate`.
  * Promoted `tf.data.experimental.unbatch` to core as `tf.data.Dataset.unbatch`.
  * Added an option for introducing slack in the pipeline to reduce CPU contention, via `tf.data.Options().experimental_slack = True`.
  * Added experimental support for parallel batching to `batch()` and `padded_batch()`. This functionality can be enabled through `tf.data.Options()`.
  * Support cancellation of long-running `reduce`.
  * Now we use the `dataset` node name as prefix instead of the op name, to identify the component correctly in metrics, for pipelines with repeated components.
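As a small illustration of the newly promoted `Dataset.unbatch` (assumes an eager-mode TF 2.x install):

```python
import tensorflow as tf

# from_tensor_slices yields the rows [1, 2] and [3, 4];
# unbatch() then splits each row back into individual scalar elements.
ds = tf.data.Dataset.from_tensor_slices([[1, 2], [3, 4]]).unbatch()
values = [int(x) for x in ds]
print(values)  # [1, 2, 3, 4]
```

This is the core-API equivalent of the former `tf.data.experimental.unbatch` transformation.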
* `tf.distribute`:
  * Enabled `tf.distribute.experimental.MultiWorkerMirroredStrategy` to work in eager mode.
  * Disabled `run_eagerly` and distribution strategy if there are symbolic tensors added to the model using `add_metric` or `add_loss`.
  * Bug fix: loss and gradients should now more reliably be correctly scaled w.r.t. the global batch size when using a `tf.distribute.Strategy`.
  * Set the default loss reduction as `AUTO` for improving reliability of loss scaling with distribution strategy and custom training loops. `AUTO` indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used in distribution strategy scope, outside of built-in training loops such as `tf.keras` `compile` and `fit`, we expect the reduction value to be `NONE` or `SUM`. Using other values will raise an error.
  * Support for multi-host `ncclAllReduce` in Distribution Strategy.
* `tf.estimator`:
  * Replaced `tf.contrib.estimator.add_metrics` with `tf.estimator.add_metrics`.
  * Use `tf.compat.v1.estimator.inputs` instead of `tf.estimator.inputs`.
  * Replaced contrib references with `tf.estimator.experimental.*` for apis in early_s in Estimator.
  * Canned Estimators will now use Keras optimizers by default. An error will be raised if `tf.train.Optimizer`s are used, and you will have to switch to `tf.keras.optimizers` or `tf.compat.v1` canned Estimators.
  * A checkpoint converter for canned Estimators has been provided to transition canned Estimators that are warm-started from `tf.train.Optimizer`s to `tf.keras.optimizers`.
  * The default aggregation for canned Estimators is now `SUM_OVER_BATCH_SIZE`. To maintain the previous default behavior, please pass `SUM` as the loss aggregation method.
  * Canned Estimators don't support the `input_layer_partitioner` arg in the API. If you have this arg, you will have to switch to `tf.compat.v1` canned Estimators.
  * `Estimator.export_savedmodel` has been renamed to `export_saved_model`.
  * When saving to SavedModel, Estimators will strip default op attributes. This is almost always the correct behavior, as it is more forwards-compatible, but if you require that default attributes be saved with the model, please use `tf.compat.v1.Estimator`.
  * Feature Columns have been upgraded to be more Eager-friendly and to work with Keras. As a result, `tf.feature_column.input_layer` has been deprecated in favor of `tf.keras.layers.DenseFeatures`. v1 feature columns have direct analogues in v2 except for `shared_embedding_columns`, which are not cross-compatible between v1 and v2. Use `tf.feature_column.shared_embeddings` instead.
  * Losses are scaled in canned estimator v2 and not in the optimizers anymore. If you are using Estimator + distribution strategy + optimizer v1, then the behavior does not change. This implies that if you are using a custom estimator with optimizer v2, you have to scale losses. We have new utilities to help scale losses: `tf.nn.compute_average_loss`, `tf.nn.scale_regularization_loss`.
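A minimal sketch of the loss-scaling utility mentioned above (assumes a TF 2.x install):

```python
import tensorflow as tf

# Per-example losses for one replica's shard of the batch.
per_example_loss = tf.constant([1.0, 2.0, 3.0, 4.0])

# Divide by the *global* batch size rather than the per-replica size,
# so that summing gradients across replicas yields a correctly
# scaled update under tf.distribute.Strategy.
loss = tf.nn.compute_average_loss(per_example_loss, global_batch_size=4)
print(loss.numpy())  # 2.5
```

Regularization terms are scaled analogously with `tf.nn.scale_regularization_loss`.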
* `tf.keras`:
  * Premade models (including Linear and WideDeep) have been introduced for the purpose of replacing Premade estimators.
  * Model saving changes:
    * `model.save` and `tf.saved_model.save` may now save to the TensorFlow SavedModel format. The model can be restored using `tf.keras.models.load_model`. HDF5 files are still supported, and may be used by specifying `save_format="h5"` when saving.
    * `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel. HDF5 files are still supported.
    * Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
  * Raw TensorFlow functions can now be used in conjunction with the Keras Functional API during model creation. This obviates the need for users to create Lambda layers in most cases when using the Functional API. Like Lambda layers, TensorFlow functions that result in Variable creation or assign ops are not supported.
  * Added support for passing a list of lists to the `metrics` argument in Keras `compile`.
  * Added `tf.keras.layers.AbstractRNNCell` as the preferred implementation for RNN cells in TF v2. Users can use it to implement RNN cells with custom behavior.
  * Keras training and validation curves are shown on the same plot when using the TensorBoard callback.
  * Switched Keras `fit`/`evaluate`/`predict` execution to use only a single unified path by default, unless eager execution has been explicitly disabled, regardless of input type. This unified path places an eager-friendly training step inside of a `tf.function`. With this: 1. All input types are converted to `Dataset`. 2. The path assumes there is always a distribution strategy; when a distribution strategy is not specified, the path uses a no-op distribution strategy. 3. The training step is wrapped in `tf.function` unless `run_eagerly=True` is set in compile. The single-path execution code does not yet support all use cases. We fall back to the existing v1 execution paths if your model contains the following: 1. `sample_weight_mode` in compile. 2. `weighted_metrics` in compile. 3. v1 optimizer. 4. target tensors in compile. If you are experiencing any issues because of this change, please inform us (file an issue) about your use case; in the meantime you can unblock yourself by setting `experimental_run_tf_function=False` in compile. We have seen a couple of use cases where the model usage pattern is not as expected and would not work with this change: 1. output tensors of one layer are used in the constructor of another. 2. symbolic tensors outside the scope of the model are used in custom loss functions. The flag can be disabled for these cases, and ideally the usage pattern will need to be fixed.
  * `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
  * Marked Keras `set_session` as `compat.v1` only.
  * `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  * `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed; a bug in the resizing implementation was fixed.
  * Added an `implementation=3` mode for the `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers, using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
  * Raise an error if the `batch_size` argument is used when the input is a dataset/generator/Keras sequence.
  * Updated TF 2.0 `keras.backend.name_scope` to use TF 2.0 `name_scope`.
  * Added v2 module aliases for losses, metrics, initializers and optimizers: `tf.losses = tf.keras.losses` & `tf.metrics = tf.keras.metrics` & `tf.initializers = tf.keras.initializers` & `tf.optimizers = tf.keras.optimizers`.
  * Updated binary cross-entropy logic in Keras when the input is probabilities. Instead of converting probabilities to logits, we are using the cross-entropy formula for probabilities.
  * Added public APIs for the `cumsum` and `cumprod` Keras backend functions.
  * Added support for temporal sample weight mode in subclassed models.
  * Raise a ValueError if an integer is passed to the training APIs.
  * Added fault-tolerance support for training a Keras model via `model.fit()` with `MultiWorkerMirroredStrategy`; a tutorial is available.
  * Callbacks are supported in `MultiWorkerMirroredStrategy`.
  * A custom Callback tutorial is now available.
  * To train with `tf.distribute`, the Keras API is recommended over estimator.
  * The `steps_per_epoch` and `steps` arguments are supported with numpy arrays.
  * New error message when unexpected keys are used in sample_weight/class_weight dictionaries.
  * Losses are scaled in Keras compile/fit and not in the optimizers anymore. If you are using a custom training loop, we have new utilities to help scale losses: `tf.nn.compute_average_loss`, `tf.nn.scale_regularization_loss`.
  * The `Layer` `apply` and `add_variable` APIs are deprecated.
  * Added support for channels-first data format in cross-entropy losses with logits, and support for tensors with unknown ranks.
  * Error messages will be raised if `add_update`, `add_metric`, `add_loss`, or activity regularizers are used inside of a control flow branch.
  * New loss reduction types: 1. `AUTO`: Indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used with `tf.distribute.Strategy`, outside of built-in training loops such as `tf.keras` `compile` and `fit`, we expect the reduction value to be `SUM` or `NONE`. Using `AUTO` in that case will raise an error. 2. `NONE`: Weighted losses with one dimension reduced (axis=-1, or the axis specified by the loss function). When this reduction type is used with built-in Keras training loops like `fit`/`evaluate`, the unreduced vector loss is passed to the optimizer, but the reported loss will be a scalar value. 3. `SUM`: Scalar sum of weighted losses. 4. `SUM_OVER_BATCH_SIZE`: Scalar `SUM` divided by the number of elements in losses. This reduction type is not supported when used with `tf.distribute.Strategy` outside of built-in training loops like `tf.keras` `compile`/`fit`.
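The reduction types above can be illustrated with a small sketch using `tf.keras.losses.Reduction` (assumes a TF 2.x install; the values shown follow directly from the MSE formula):

```python
import tensorflow as tf

y_true = tf.constant([[0.0], [1.0]])
y_pred = tf.constant([[0.5], [0.5]])

# NONE keeps one loss value per example (the last axis is still reduced).
mse_none = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)

# SUM collapses the per-example losses into a single scalar.
mse_sum = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.SUM)

print(mse_none(y_true, y_pred).numpy())  # [0.25 0.25]
print(mse_sum(y_true, y_pred).numpy())   # 0.5
```

`SUM_OVER_BATCH_SIZE` would instead yield 0.25 here, and `AUTO` picks one of these based on the usage context.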

* `tf.lite`:
  * Added support for the TFLiteConverter Python API in 2.0. It contains the functions `from_saved_model`, `from_keras_file`, and `from_concrete_functions`.
  * Removed `lite.OpHint`, `lite.experimental`, and `lite.constant` from the 2.0 API.
  * Added support for the `tflite_convert` command-line tool in 2.0.
  * The post-training quantization tool supports quantizing weights shared by multiple operations. The models made with versions of this tool will use INT8 types for weights and will only be executable by interpreters from this version onwards.
  * The post-training quantization tool supports fp16 weights and GPU delegate acceleration for fp16.

* `tf.contrib`:
  * Exposed `tf.contrib.proto.*` ops in `tf.io` (they will exist in TF2).
  * Removed the `tf.contrib.timeseries` dependency on TF distributions.
  * Replaced contrib references with `tf.estimator.experimental.*` for apis in early_stopping.py.
Other:
 Bug fix for
tf.tile gradient
.  TF code now resides in
tensorflow_core
andtensorflow
is just a virtual pip package. No code changes are needed for projects using TensorFlow, the change is transparent  Added gradient for
SparseToDense
op.  Expose a flag that allows the number of threads to vary across Python benchmarks.
 ResourceVariable's gather op supports batch dimensions.
image.resize
in 2.0 now supports gradients for the new resize kernels. removed
tf.string_split
from v2 API  Variadic reduce is supported on CPU Variadic reduce is supported on CPU
 Added GPU implementation of
tf.linalg.tridiagonal_solve
.  Delete unused lookup table code
 Remove unused
StringViewVariantWrapper
.  Delete unused
Fingerprint64Map
op registration  Add broadcasting support to
tf.matmul
.  Add ellipsis (...) support for
tf.einsum()
.  ResourceVariable support for
gather_nd
.  Add expand_composites argument to all nest.* methods.
 Standardize the LayerNormalization API by replacing the args
norm_axis
andparams_axis
withaxis
.  Add a new "result_type" parameter to
tf.strings.split
add_update
can now be passed a zeroarg callable in order to support turning off the update when settingtrainable=False
on a Layer of a Model compiled withrun_eagerly=True
. Added
tf.random.binomial
.  Extend
tf.function
with basic support for CompositeTensors arguments (such as SparseTensor and RaggedTensor).  Add name argument to
tf.string_split
andtf.strings_split
.  Added
strings.byte_split
.  CUDNN_INSTALL_PATH, TENSORRT_INSTALL_PATH, NCCL_INSTALL_PATH, NCCL_HDR_PATH are deprecated. Use TF_CUDA_PATHS instead which supports a commaseparated list of base paths that are searched to find CUDA libraries and headers.
 Add
RaggedTensor.placeholder()
.  Add pfor converter for
Squeeze
.  Renamed
tf.image
functions to remove duplicate "image" where it is redundant.  Add C++ Gradient for BatchMatMulV2.
parallel_for.pfor
: add converters for Softmax, LogSoftmax, IsNaN, All, Any, and MatrixSetDiag.parallel_for
: add converters for LowerTriangularSolve and Cholesky. Add ragged tensor support to
tf.squeeze
.  Allow
LinearOperator.solve
to take aLinearOperator
.  Allow all dtypes for
LinearOperatorCirculant
.  Introduce MaxParallelism method
parallel_for
: add converter forBroadcastTo
. Add
LinearOperatorHouseholder
.  Added
key
andskip
methods torandom.experimental.Generator
.  Adds Philox support to new stateful RNG's XLA path.
 Update RaggedTensors to support int32 row_splits.
 Add
TensorSpec
support for CompositeTensors.  Added partial_pivoting input parameter to
tf.linalg.tridiagonal_solve
.  Extend
tf.strings.split
to support inputs with any rank  Removing the
experimental_numa_aware
option fromtf.data.Options
.  Improve the performance of datasets using
from_tensors()
.  Add
tf.linalg.tridiagonal_mul op
.  Add
LinearOperatorToeplitz
.  Added gradient to
tf.linalg.tridiagonal_solve
.  Upgraded LIBXSMM to version 1.11.
parallel_for
: add converters forLogMatrixDeterminant
andMatrixBandPart
. Uniform processing of quantized embeddings by Gather and EmbeddingLookup Ops
 Correct a misstatement in the documentation of the sparse softmax cross entropy logit parameter.
parallel_for
: Add converters forOneHot
,LowerBound
,UpperBound
. Added GPU implementation of
tf.linalg.tridiagonal_matmul
.  Add gradient to
tf.linalg.tridiagonal_matmul
.  Add
tf.ragged.boolean_mask
. tf.switch_case
added, which selects a branch_fn based on a branch_index. The C++ kernel of gather op supports batch dimensions.
- Promoting `unbatch` from experimental to core API.
- Fixed default value and documentation for the `trainable` arg of `tf.Variable`.
- Adds `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()`.
- EagerTensor now supports the buffer interface for tensors.
- This change bumps the version number of the FullyConnected op to 5.
- Fix a crash when a pointer becomes nullptr.
- `parallel_for`: add converter for `MatrixDiag`.
- Add `narrow_range` attribute to QuantizeAndDequantizeV2 and V3.
- Added new op: `tf.strings.unsorted_segment_join`.
- TensorFlow code now produces two different pip packages: `tensorflow_core`, containing all the code (in the future it will contain only the private implementation), and `tensorflow`, a virtual pip package that forwards to `tensorflow_core` (and in the future will contain only the public API of TensorFlow).
- Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices`, and batching and unbatching of nested datasets.
- Add HW acceleration support for `topK_v2`.
- Add new TypeSpec classes.
- CloudBigtable version updated to v0.10.0.
- Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.
- Expose Head as public API.
- Update docstring for gather to properly describe the non-empty `batch_dims` case.
- Added `tf.sparse.from_dense` utility function.
- Add `GATHER` support to NN API delegate.
- Improved ragged tensor support in `TensorFlowTestCase`.
- Makes the A-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not.
- `ResizeInputTensor` now works for all delegates.
- Start of open development of TF, TFLite, and XLA MLIR dialects.
- Add `EXPAND_DIMS` support to NN API delegate.
- `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.
- Add support for local soft device placement for eager ops.
- Pass `partial_pivoting` to the `_TridiagonalSolveGrad`.
- Add HW acceleration support for LogSoftMax.
- Added a function `nested_value_rowids` for ragged tensors.
- Fixed a bug in histogram_op.cc.
- Add guard to avoid acceleration of L2 Normalization with input rank != 4.
- Added evaluation script for COCO minival.
- Add delegate support for `QUANTIZE`.
- Add `tf.math.cumulative_logsumexp` operation.
- Add `tf.ragged.stack`.
- Add delegate support for `QUANTIZED_16BIT_LSTM`.
- `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.
- Fix memory allocation problem when calling `AddNewInputConstantTensor`.
- Delegate application failure leaves the interpreter in a valid state.
- `tf.cond`, `tf.while`, and `if` and `while` in AutoGraph now accept a non-scalar predicate if it has a single element. This does not affect non-V2 control flow.
- Enables v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
- Fix potential security vulnerability where decoding variant tensors from proto could result in heap out-of-bounds memory access.
- Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc.
- Only create a GCS directory object if the object does not already exist.
- Introduce `dynamic` constructor argument in Layer and Model, which should be set to True when using imperative control flow in the `call` method.
- `ResourceVariable` and `Variable` no longer accept `constraint` in the constructor, nor expose it as a property.
- Add UnifiedGRU as the new GRU implementation for TF 2.0. Change the default recurrent activation function for GRU from 'hard_sigmoid' to 'sigmoid', and 'reset_after' to True in 2.0. Historically, the recurrent activation was 'hard_sigmoid' since it is faster than 'sigmoid'. With the new unified backend between CPU and GPU mode, and since the CuDNN kernel uses sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default GRU is compatible with both the CPU and GPU kernels. This enables users with GPUs to use the CuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. If users want to use their 1.x pretrained checkpoints, please construct the layer with GRU(recurrent_activation='hard_sigmoid', reset_after=False) to fall back to the 1.x behavior.
- Begin adding Go wrapper for C Eager API.
- XLA HLO graphs can now be inspected with the interactive_graphviz tool.
- Add dataset ops to the graph (or create kernels in eager execution) during Python Dataset object creation instead of doing it at iterator creation time.
- Add `batch_dims` argument to `tf.gather`.
- Removal of `dtype` in the constructor of initializers and of `partition_info` in call.
- Add `tf.math.nextafter` op.
- Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically dispatches the best kernel implementation based on CPU vector architecture. To disable them, build with `--define=tensorflow_mkldnn_contraction_kernel=0`.
- `tf.linspace(start, stop, num)` now always uses "stop" as the last value (for num > 1).
- Added top-k to precision and recall to Keras metrics.
- Add a ragged size op and register it to the op dispatcher.
- Transitive dependencies on `:pooling_ops` were removed. Some users may need to add explicit dependencies on `:pooling_ops` if they reference the operators from that library.
- Add CompositeTensor base class.
- Malformed GIF images could result in an out-of-bounds access in the color palette of the frame. This has now been fixed.
- Add templates and interfaces for creating lookup tables.
- `Tensor::UnsafeCopyFromInternal` deprecated in favor of `Tensor::BitcastFrom`.
- In `map_vectorization` optimization, reduce the degree of parallelism in the vectorized map node.
- Add variant wrapper for `absl::string_view`.
- Wraps losses passed to the `compile` API (strings and v1 losses) which are not instances of the v2 `Loss` class in a `LossWrapper` class. All losses will now use `SUM_OVER_BATCH_SIZE` reduction as default.
- Add OpKernels for some stateless maps.
- Add v2 APIs for AUCCurve and AUCSummationMethod enums.
- Allow non-Tensors through v2 losses.
- Add v2 sparse categorical crossentropy metric.
- `DType` is no longer convertible to an int. Use `dtype.as_datatype_enum` instead of `int(dtype)` to get the same result.
- Support both binary and -1/1 label input in v2 hinge and squared hinge losses.
- Added `LinearOperator.adjoint` and `LinearOperator.H` (alias).
- Expose CriticalSection in core as `tf.CriticalSection`.
- Enhanced graphviz output.
- The behavior of `tf.gather` is now correct when `axis=None` and `batch_dims<0`.
- Add `tf.linalg.tridiagonal_solve` op.
- Add op-kernel templates for common table operations.
- Fix issue: callbacks do not log values in eager mode when a deferred-build model is used.
- SignatureDef util functions have been deprecated.
- Update Fingerprint64Map to use aliases.
- Add legacy string flat hash map op kernels.
- Fix: `model.add_loss(symbolic_tensor)` should work in ambient eager.
- Adding `clear_losses` API to be able to clear losses at the end of a forward pass in a custom training loop in eager.
- Add support for `add_metric` in the graph function mode.
- Updating cosine similarity loss: removed the negate sign from cosine similarity.
- TF 2.0: Update metric name to always reflect what the user has given in compile. Affects the following cases: 1. when name is given as 'accuracy'/'crossentropy'; 2. when an aliased function name is used, e.g. 'mse'; 3. removing the `weighted` prefix from weighted metric names.
- Workaround for compiler bug(?)
- Changed default for gradient accumulation for TPU embeddings to true.
- Adds summary trace API for collecting graph and profile information.
- `image.resize` now considers proper pixel centers and has new kernels (incl. anti-aliasing).
- Bug fix for
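The `tf.linspace` endpoint change noted above can be verified directly:

```python
import tensorflow as tf

# tf.linspace now always includes `stop` as the last value (for num > 1).
vals = tf.linspace(0.0, 1.0, 5)
# vals == [0.0, 0.25, 0.5, 0.75, 1.0]; the last element is exactly `stop`.
```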
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
1e100, a6802739, Abolfazl Shahbazi, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy, Amit, Amit Srivastava, Andy Craze, Anshuman Tripathy, Armen Poghosov, armenpoghosov, Arpit Shah, Ashwin Ramaswami, Aurelien Geron, AuréLien Geron, aweers, awesomealex1, Bairen Yi, Ben Barsdell, Bhavani Subramanian, Brandon Carter, candy.dc, Chao Liu, Clayne Robison, csukuangfj, Dan Jarvis, Dan Lazewatsky, Daniel Ingram, Dave Airlie, David Norman, Dayananda V, Denis Khalikov, Deven Desai, Dheeraj Rajaram Reddy, dmitrievanthony, Drew Szurko, Duncan Riach, Fei Hu, Felix Lemke, Filip Matzner, fo40225, frreiss, Gautam, gehring, Grzegorz George Pawelczak, Grzegorz Pawelczak, HanGuo97, Hari Shankar, hehongliang, Heungsub Lee, Hoeseong Kim, IHong Jhuo, Ilango R, Innovimax, Jacky Ko, Jakub Lipinski, jcf94, Jeff Poznanovic, Jia Qingtong, Jiankang, Joe Q, Joe Quadrino, Jonas Rauber, Jonathan Kyl, Joppe Geluykens, Joseph Friedman, jtressle, jwu, K. Hodges, kaixih, Karl Lessard, Karl Weinmeister, Kashif Rasul, kjopek, KoanSin Tan, kouml, ktaebum, Laurent Le Brun, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Geiger, Luke Han, Mahmoud Abuzaina, manhyuk, Marco Gaido, Marek Drozdowski, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley, MattConley, mbhuiyan, mdfaijul, Melissa Grueter, Michael KäUfl, MickaëL Schoentgen, Miguel Morin, Mike Arpaia, nammbash, Natalia Gimelshein, NayanaIbm, neargye, Nehal J Wani, Niels Ole Salscheider, Niranjan Hasabnis, Nutti, olicht, P Sudeepam, Paige Bailey, Palmer Lao, Pariksheet Pinjari, Pavel Samolysov, Pooya Davoodi, Ryan Jiang, Samantha Andow, Sami Kama, Saurabh Deoras, Shahzad Lone, Shashi, Siju, Siju Samuel, SneaseAbq, Spencer Schaber, srinivasan.narayanamoorthy, Steve Lang, Steve Nesae, Supriya Rao, Taylor Jakobson, Taylor Thornton, ThisIsPIRI, Thomas Deegan, tomguluson92, Tongxuan Liu, Vagif, vcarpani, Vikram Tiwari, Vishwak Srinivasan, VitorAlves, wangsiyu, WeberXie, WeijieSun, WenHeng (Jack) Chung, wenxizhu, William D. 
Irons, Yan Facai (颜发才), ymodak, Yong Tang, Younes Khoudli, Yuan Lin, YvesNoel Weweler, zjjott, 卜居, 王振华 (Wang Zhenhua),
4d55397500, a6802739, Abdullah Selek, abenmao, Adam Richter, Ag Ramesh, Albin Joy, Alex, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Aman Patel, Amit, Amit Kumar Jaiswal, Amit Srivastava, amoitra, Andreas Eberle, Andrew Lihonosov, Anthony Hsu, Anthony Platanios, Anuj Rawat, arp95, Arpit Shah, Astropeak, Augustina Ragwitz, Aurelien Geron, AuréLien Geron, avasid, aweers, Ayush Agrawal, Bas Aarts, Bastian Eichenberger, Bayberry Z, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bin Fan, blairhan, BléNesi Attila, BodinE, Bryan Cutler, Cao Zongyan, Casper Da CostaLuis, Chen Guoyin, chenchc, chengchingwen, chie8842, Christian Hansen, Christoph Boeddeker, Christopher Yeh, Clayne Robison, Coady, Patrick, crafet, ctiijima, Daniel Rasmussen, Daniel Salvadori, David Norman, delock, Denis Khalikov, Deven Desai, Diego Caballero, Donovan Ong, Duncan Dean, Duncan Riach, Dustin Neighly, Dwight J Lyle, Eamon ItoFisher, eashtian3, Edward Forgacs, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Evgeniy Polyakov, Fangjun Kuang, Federico Martinez, Fei Hu, Filip Matzner, FlashTek, fo40225, formath, FrançOis Chollet, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. 
Hussain Chinoy, Gabriel, gehring, Geoffrey Irving, George Grzegorz Pawelczak, George Sterpu, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, GyoungYoon Ryoo, haison, Hanton Yang, Haraldur TóMas HallgríMsson, Huan Li (李卓桓), HåKon Sandsmark, IHong, Ilham Firdausi Putra, Imran Salam, Irene Dea, Ivan Habernal, Jacky, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, jer, Jeroen BéDorf, jerryyin, jhalakp, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Jonathan, Joon, Josh Beal, Julian Niedermeier, Junqin Zhang, Justin Dujardin, Justin Tunis, Kaixi Hou, Karthik Muthuraman, Kay Zhu, KbhuteIbm, KDR, Keno Fischer, Kevin Mader, khanhlvg, Kilaru Yasaswi Sri Chandra Gandhi, Koock Yoon, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, LeslieFang, Letian Kang, Li, Guizi, Lukas Folle, Lukas Geiger, luxupu, lvli, Ma, Guokai, Mahmoud Abuzaina, Maksym Kysylov, Mandar Deshpande, Manraj Singh Grover, Margaret MaynardReid, Mark Ryan, Matt Conley, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. Tarnowski, Mihail Salnikov, Mikalai Drabovich, Mike Holcomb, minds, monklof, Moses Marin, mpppk, Mr. Metal, MshrH, musikisomorphie, nammbash, Nathan Luehr, Nayana Thorat, Neeraj Pradhan, Neil, Nick, Nick Lewycky, Niels Ole Salscheider, Niklas SilfverströM, Niranjan Hasabnis, Nuka137, ocjosen, omeir1, P Sudeepam, Pan Daoxin, Pariksheet Pinjari, Pasquale Minervini, Patrick J. 
Lopresti, Patrik Gustavsson, Pavel Akhtyamov, PENGWA, per1234, PeterLee, Phan Van Nguyen Duc, Philipp Jund, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, R S Nikhil Krishna, Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, robert, Rohit Gupta, Roland Zimmermann, Roman Soldatow, RonLek, Ruizhe, Ryan Jiang, saishruthi, Saleem Abdulrasool, Sami Kama, SanaDamani, sdamani, Sean Morgan, seanshpark, Sebastien Iooss, ServInc, Severen Redwood, Shashank Gupta, shashvat, Shashvat Chand Shahi, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, sremedios, Srini511, srinivasan.narayanamoorthy, Subin, Sumesh Udayakumaran, Sungmann Cho, sunway513, sxwang, TaeHwan Jung, Taehoon Lee, Takeo Sawada, Taylor Jakobson, Ted Chang, TengLu, terryky, ThisIsIsaac, Thomas Deegan, Thomas Hagebols, tianyapiaozi, Till Hoffmann, Tim Zaman, Tongxuan Liu, Trent Lo, Trevor Morris, TungJerry, Tyorden, Uday Bondhugula, v1incent, Vasileios Lioutas, vbvg2008, Vijay Ravichandran, Viktor Gal, Vincent, Vishnuvardhan Janapati, Vivek Suryamurthy, wangsiyu, wateryzephyr, Wei Wang, WenHeng (Jack) Chung, wenxizhu, Will Battel, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xiaoquan Kong, Xin, Xinping Wang, YannYy, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Yuan (Terry) Tang, Yuchen Ying, zhangyujing, zyeric, 王振华 (Zhenhua Wang), 黄鑫
Release 1.13.2
Bug Fixes and Other Changes
- Updates `png_archive` dependency to 1.6.37 to not be affected by CVE-2019-7317, CVE-2018-13785, and CVE-2018-14048.
- Updates `sqlite` dependency to 3.28.0 to not be affected by CVE-2018-20506, CVE-2018-20346, and CVE-2018-20505.
Release 1.12.3
Bug Fixes and Other Changes
- Updates `png_archive` dependency to 1.6.37 to not be affected by CVE-2019-7317, CVE-2018-13785, and CVE-2018-14048.
- Updates `sqlite` dependency to 3.28.0 to not be affected by CVE-2018-20506, CVE-2018-20346, and CVE-2018-20505.
Release 1.14.0
Major Features and Improvements
- This is the first 1.x release containing the compat.v2 module. This module is required to allow libraries to publish code which works in both 1.x and 2.x. After this release, no backwards incompatible changes are allowed in the 2.0 Python API.
- Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically dispatches the best kernel implementation based on CPU vector architecture. To disable them, build with `--define=tensorflow_mkldnn_contraction_kernel=0`.
- Non-Windows system libraries are now versioned. This should be a no-op for most users, as it affects only system package maintainers or those building extensions to TensorFlow:
  - Python wheels (Pip packages) contain one library file.
    - Linux: `libtensorflow_framework.so.1`
    - MacOS: `libtensorflow_framework.1.dylib`
  - Our `libtensorflow` tarball archives contain the `libtensorflow` library and two symlinks. MacOS `.dylib` libraries are the same, but match MacOS library naming requirements (i.e. `libtensorflow.1.dylib`):
    - `libtensorflow.so.1.14.0`, the main library
    - `libtensorflow.so.1`, symlinked to the main library
    - `libtensorflow.so`, symlinked to `.so.1`
Behavioral changes
- Set default loss reduction as `AUTO` for improving reliability of loss scaling with distribution strategy and custom training loops. `AUTO` indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to `SUM_OVER_BATCH_SIZE`. When used in distribution strategy scope, outside of built-in training loops such as `tf.keras` `compile` and `fit`, we expect the reduction value to be 'None' or 'SUM'. Using other values will raise an error.
- Wraps losses passed to the `compile` API (strings and v1 losses) which are not instances of the v2 `Loss` class in a `LossWrapper` class. All losses will now use `SUM_OVER_BATCH_SIZE` reduction as default.
- Disable `run_eagerly` and distribution strategy if there are symbolic tensors added to the model using `add_metric` or `add_loss`.
- `tf.linspace(start, stop, num)` now always uses "stop" as the last value (for num > 1).
- The behavior of `tf.gather` is now correct when `axis=None` and `batch_dims<0`.
- Only create a GCS directory object if the object does not already exist.
- In `map_vectorization` optimization, reduce the degree of parallelism in the vectorized map node.
- Bug fix: loss and gradients should now more reliably be correctly scaled w.r.t. the global batch size when using a `tf.distribute.Strategy`.
- Updating cosine similarity loss: removed the negate sign from cosine similarity.
- `DType` is no longer convertible to an int. Use `dtype.as_datatype_enum` instead of `int(dtype)` to get the same result.
- Changed default for gradient accumulation for TPU embeddings to true.
- Callbacks now log values in eager mode when a deferred-build model is used.
- Transitive dependencies on `:pooling_ops` were removed. Some users may need to add explicit dependencies on `:pooling_ops` if they reference the operators from that library.
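The new `AUTO` reduction default can be sketched as follows; the model and data shapes are illustrative only:

```python
import tensorflow as tf

# Sketch: a v2 loss defaults to reduction=AUTO, which resolves to
# SUM_OVER_BATCH_SIZE outside of a distribution-strategy scope.
loss = tf.keras.losses.BinaryCrossentropy()  # reduction defaults to AUTO
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(4,))])
model.compile(optimizer='sgd', loss=loss)

# Called directly, the loss reduces to a single scalar per batch.
y_true = tf.constant([[0.0], [1.0]])
y_pred = tf.constant([[0.4], [0.6]])
batch_loss = loss(y_true, y_pred)  # scalar
```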
Bug Fixes and Other Changes
- Documentation
- Deprecations and Symbol renames.
  - The GPU configuration env parameter `TF_CUDA_HOST_MEM_LIMIT_IN_MB` has been changed to `TF_GPU_HOST_MEM_LIMIT_IN_MB`.
  - Remove unused `StringViewVariantWrapper`.
  - Delete unused `Fingerprint64Map` op registration.
  - SignatureDef util functions have been deprecated.
  - Renamed `tf.image` functions to remove duplicate "image" where it is redundant.
  - `tf.keras.experimental.export` renamed to `tf.keras.experimental.export_saved_model`.
  - Standardize the LayerNormalization API by replacing the args `norm_axis` and `params_axis` with `axis`.
  - `Tensor::UnsafeCopyFromInternal` deprecated in favor of `Tensor::BitcastFrom`.
- Keras & Python API:
  - Add v2 module aliases for:
    - tf.initializers => tf.keras.initializers
    - tf.losses => tf.keras.losses & tf.metrics => tf.keras.metrics
    - tf.optimizers => tf.keras.optimizers
  - Add `tf.keras.layers.AbstractRNNCell` as the preferred implementation of RNN cell for TF v2. Users can use it to implement RNN cells with custom behavior.
  - Adding `clear_losses` API to be able to clear losses at the end of a forward pass in a custom training loop in eager.
  - Add support for passing a list of lists to the `metrics` param in Keras `compile`.
  - Added top-k to precision and recall to Keras metrics.
  - Adding public APIs for `cumsum` and `cumprod` Keras backend functions.
  - Fix: `model.add_loss(symbolic_tensor)` should work in ambient eager.
  - Add `name` argument to `tf.string_split` and `tf.strings_split`.
  - Minor change to SavedModels exported from Keras using `tf.keras.experimental.export`. (The SignatureDef key for evaluation mode is now "eval" instead of "test".) This will be reverted back to "test" in the near future.
  - Updates binary cross entropy logic in Keras when the input is probabilities. Instead of converting probabilities to logits, we use the cross entropy formula for probabilities.
  - Raw TensorFlow functions can now be used in conjunction with the Keras Functional API during model creation. This obviates the need for users to create Lambda layers in most cases when using the Functional API. Like Lambda layers, TensorFlow functions that result in Variable creation or assign ops are not supported.
  - Keras training and validation curves are shown on the same plot.
  - Introduce `dynamic` constructor argument in Layer and Model, which should be set to True when using imperative control flow in the `call` method.
  - Removal of `dtype` in the constructor of initializers and of `partition_info` in call.
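The `dynamic` constructor argument mentioned above can be sketched as follows; the layer name and its branching logic are hypothetical:

```python
import tensorflow as tf

# Sketch: a layer whose call() uses imperative, data-dependent Python
# control flow opts out of graph tracing by passing dynamic=True.
class ZeroIfNegativeSum(tf.keras.layers.Layer):  # hypothetical layer
    def __init__(self):
        super().__init__(dynamic=True)

    def call(self, inputs):
        # A plain Python `if` on a tensor value; needs eager execution.
        if tf.reduce_sum(inputs) > 0:
            return inputs
        return tf.zeros_like(inputs)
```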
- New ops and improved op functionality:
  - Add OpKernels for some stateless maps.
  - Add v2 APIs for AUCCurve and AUCSummationMethod enums.
  - Add `tf.math.nextafter` op.
  - Add CompositeTensor base class.
  - Add `tf.linalg.tridiagonal_solve` op.
  - Add op-kernel templates for common table operations.
  - Added GPU implementation of `tf.linalg.tridiagonal_solve`.
  - Added support for TFLite in TensorFlow 2.0.
  - Adds summary trace API for collecting graph and profile information.
  - Add `batch_dims` argument to `tf.gather`.
  - Add support for `add_metric` in the graph function mode.
  - Add C++ Gradient for BatchMatMulV2.
  - Added `tf.random.binomial`.
  - Added gradient for SparseToDense op.
  - Add legacy string flat hash map op kernels.
  - Add a ragged size op and register it to the op dispatcher.
  - Add broadcasting support to `tf.matmul`.
  - Add ellipsis (...) support for `tf.einsum()`.
  - Added `LinearOperator.adjoint` and `LinearOperator.H` (alias).
  - Added `strings.byte_split`.
  - Add `RaggedTensor.placeholder()`.
  - Add a new "result_type" parameter to `tf.strings.split`.
  - `add_update` can now be passed a zero-arg callable in order to support turning off the update when setting `trainable=False` on a Layer of a Model compiled with `run_eagerly=True`.
  - Add variant wrapper for `absl::string_view`.
  - Add `expand_composites` argument to all `nest.*` methods.
  - Add pfor converter for Squeeze.
  - Bug fix for `tf.tile` gradient.
  - Expose CriticalSection in core as `tf.CriticalSection`.
  - Update Fingerprint64Map to use aliases.
  - ResourceVariable support for `gather_nd`.
  - ResourceVariable's gather op supports batch dimensions.
  - Variadic reduce is supported on CPU.
  - Extend `tf.function` with basic support for CompositeTensor arguments (such as SparseTensor and RaggedTensor).
  - Add templates and interfaces for creating lookup tables.
  - Post-training quantization tool supports quantizing weights shared by multiple operations. The models made with versions of this tool will use INT8 types for weights and will only be executable by interpreters from this version onwards.
  - Malformed GIF images could result in an out-of-bounds access in the color palette of the frame. This has now been fixed.
  - `image.resize` now considers proper pixel centers and has new kernels (incl. anti-aliasing).
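The `tf.matmul` broadcasting and `tf.einsum` ellipsis additions noted above can be sketched together; the shapes are illustrative:

```python
import tensorflow as tf

# Sketch: tf.matmul now broadcasts batch dimensions, and tf.einsum
# accepts '...' to stand for them. Both produce the same result here.
a = tf.random.normal([2, 3, 4])   # batch of two 3x4 matrices
b = tf.random.normal([4, 5])      # rank-2, broadcast across the batch dim
c1 = tf.matmul(a, b)              # shape [2, 3, 5]
c2 = tf.einsum('...ij,jk->...ik', a, b)
```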
- Performance:
  - Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically dispatches the best kernel implementation based on CPU vector architecture. To disable them, build with `--define=tensorflow_mkldnn_contraction_kernel=0`.
  - Support for multi-host ncclAllReduce in Distribution Strategy.
  - Expose a flag that allows the number of threads to vary across Python benchmarks.
- TensorFlow 2.0 Development:
  - Add v2 sparse categorical crossentropy metric.
  - Allow non-Tensors through v2 losses.
  - Add UnifiedGRU as the new GRU implementation for TF 2.0. Change the default recurrent activation function for GRU from 'hard_sigmoid' to 'sigmoid', and 'reset_after' to True in 2.0. Historically, the recurrent activation was 'hard_sigmoid' since it is faster than 'sigmoid'. With the new unified backend between CPU and GPU mode, and since the CuDNN kernel uses sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default GRU is compatible with both the CPU and GPU kernels. This enables users with GPUs to use the CuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. If users want to use their 1.x pretrained checkpoints, please construct the layer with GRU(recurrent_activation='hard_sigmoid', reset_after=False) to fall back to the 1.x behavior.
  - TF 2.0: Update metric name to always reflect what the user has given in compile. Affects the following cases: 1. when name is given as 'accuracy'/'crossentropy'; 2. when an aliased function name is used, e.g. 'mse'; 3. removing the `weighted` prefix from weighted metric names.
  - Begin adding Go wrapper for C Eager API.
  - `image.resize` in 2.0 now supports gradients for the new resize kernels.
  - Removed `tf.string_split` from the v2 API.
  - Expose `tf.contrib.proto.*` ops in `tf.io` (they will exist in TF2).
  - Updates the TFLiteConverter API in 2.0: changes `from_concrete_function` to `from_concrete_functions`.
  - Enable `tf.distribute.experimental.MultiWorkerMirroredStrategy` working in eager mode.
  - Support both binary and -1/1 label input in v2 hinge and squared hinge losses.
- TensorFlow Lite:
  - Adds support for `tflite_convert` in 2.0.
  - Remove `lite.OpHint`, `lite.experimental`, and `lite.constant` from the 2.0 API.
- tf.contrib:
  - Added Neural Turing Machine implementation as described in https://arxiv.org/abs/1807.08518.
  - Remove `tf.contrib.timeseries` dependency on TF distributions.
- tf.data:
  - Add `num_parallel_reads` and passing in a Dataset containing filenames into `TextLineDataset` and `FixedLengthRecordDataset`.
  - Going forward we operate in TF 2.0; this change is part of the effort to slowly convert XYZDataset to the DatasetV2 type, the official version to be used in TF 2.0. It is motivated by a compatibility issue found when moving `contrib.bigtable` to `tensorflow_io`: `_BigtableXYZDataset` (of type DatasetV2) does not implement the `_as_variant_tensor()` of DatasetV1. Converting to DatasetV2 removes the overhead of maintaining V1 while we move to TF 2.0.
  - Add dataset ops to the graph (or create kernels in eager execution) during Python Dataset object creation instead of doing it at iterator creation time.
  - Add support for TensorArrays to `tf.data` Dataset.
  - Switching tf.data functions to use `defun`, providing an escape hatch to continue using the legacy `Defun`.
- Toolchains:
  - `CUDNN_INSTALL_PATH`, `TENSORRT_INSTALL_PATH`, `NCCL_INSTALL_PATH`, and `NCCL_HDR_PATH` are deprecated. Use `TF_CUDA_PATHS` instead, which supports a comma-separated list of base paths that are searched to find CUDA libraries and headers.
  - TF code now resides in `tensorflow_core` and `tensorflow` is just a virtual pip package. No code changes are needed for projects using TensorFlow; the change is transparent.
- XLA:
  - XLA HLO graphs can now be inspected with the interactive_graphviz tool.
  - Adds Philox support to the new stateful RNG's XLA path.
- Estimator:
  - Use `tf.compat.v1.estimator.inputs` instead of `tf.estimator.inputs`.
  - Replace `contrib` references with `tf.estimator.experimental.*` for APIs in `early_stopping.py`.
  - Determining the "correct" value of `iterations_per_loop` for TPUEstimator or DistributionStrategy continues to be a challenge for our users. We propose dynamically tuning the `iterations_per_loop` variable, specifically for using TPUEstimator in training mode, based on a user target TPU execution time. Users might specify a value such as `iterations_per_loop=300s`, which will result in roughly 300 seconds being spent on the TPU between host-side operations.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
1e100, 4d55397500, a6802739, abenmao, Adam Richter, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy, Alex, Aman Patel, Amit, Amit Kumar Jaiswal, Amit Srivastava, Andreas Eberle, Andy Craze, Anthony Hsu, Anthony Platanios, Anuj Rawat, Armen Poghosov, armenpoghosov, arp95, Arpit Shah, Ashwin Ramaswami, Augustina Ragwitz, Aurelien Geron, AuréLien Geron, avasid, aweers, awesomealex1, Ayush Agrawal, Bayberry Z, Ben Barsdell, Bharat Raghunathan, Bhavani Subramanian, Bin Fan, blairhan, BléNesi Attila, BodinE, Brandon Carter, candy.dc, Cheng Chang, Chao Liu, chenchc, chie8842, Christian Hansen, Christian Sigg, Christoph Boeddeker, Clayne Robison, crafet, csukuangfj, ctiijima, Dan Jarvis, Dan Lazewatsky, Daniel Ingram, Daniel Rasmussen, Daniel Salvadori, Dave Airlie, David Norman, Dayananda V, DayanandaV, delock, Denis Khalikov, Deven Desai, Dheeraj Rajaram Reddy, dmitrievanthony, Donovan Ong, Drew Szurko, Duncan Riach, Dustin Neighly, Edward Forgacs, EFanZh, Evgeniy Polyakov, Fangjun Kuang, Federico Martinez, Fei Hu, Felix Lemke, Filip Matzner, fo40225, Fred Reiss, Gautam, gehring, Geoffrey Irving, George Sterpu, Grzegorz George Pawelczak, Grzegorz Pawelczak, Gurpreet Singh, GyoungYoon Ryoo, HanGuo97, Hanton Yang, Hari Shankar, hehongliang, Heungsub Lee, Hoeseong Kim, Huan Li (李卓桓), IHong Jhuo, Ilango R, Innovimax, Irene Dea, Jacky Ko, Jakub Lipinski, Jason Zaman, jcf94, Jeffrey Poznanovic, Jens Elofsson, Jeroen BéDorf, jhalakp, Jia Qingtong, Jiankang, Joe Q, Joe Quadrino, Joeran Beel, Jonas Rauber, Jonathan, Jonathan Kyl, Joppe Geluykens, Joseph Friedman, jtressle, jwu, K Yasaswi Sri Chandra Gandhi, K. 
Hodges, Kaixi Hou, Karl Lessard, Karl Weinmeister, Karthik Muthuraman, Kashif Rasul, KDR, Keno Fischer, Kevin Mader, Kilaru Yasaswi Sri Chandra Gandhi, kjopek, KoanSin Tan, kouml, ktaebum, Lakshay Tokas, Laurent Le Brun, Letian Kang, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Geiger, Luke Han, luxupu, lvli, Ma, Guokai, Mahmoud Abuzaina, Maksym Kysylov, Mandar Deshpande, manhyuk, Marco Gaido, Marek Drozdowski, Mark Collier, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley, MattConley, mbhuiyan, mdfaijul, Melissa Grueter, Michael KäUfl, MickaëL Schoentgen, Miguel Morin, Mihail Salnikov, Mikalai Drabovich, Mike Arpaia, Mike Holcomb, monklof, Moses Marin, Mr. Metal, MshrH, nammbash, Natalia Gimelshein, Nathan Luehr, NayanaIbm, neargye, Neeraj Pradhan, Nehal J Wani, Nick, Nick Lewycky, Niels Ole Salscheider, Niranjan Hasabnis, nlewycky, Nuka137, Nutti, olicht, omeir1, P Sudeepam, Palmer Lao, Pan Daoxin, Pariksheet Pinjari, Pasquale Minervini, Pavel Akhtyamov, Pavel Samolysov, PENGWA, Philipp Jund, Pooya Davoodi, Pranav Marathe, R S Nikhil Krishna, Rohit Gupta, Roland Zimmermann, Roman Soldatow, rthadur, Ruizhe, Ryan Jiang, saishruthi, Samantha Andow, Sami Kama, SanaDamani, Saurabh Deoras, sdamani, Sean Morgan, seanshpark, Sebastien Iooss, Sergii Khomenko, ServInc, Shahzad Lone, Shashank Gupta, Shashi, shashvat, Shashvat Chand Shahi, Siju, Siju Samuel, SneaseAbq, Spencer Schaber, sremedios, srinivasan.narayanamoorthy, Steve Lang, Steve Nesae, Subin, Sumesh Udayakumaran, sunway513, Supriya Rao, sxwang, Takeo Sawada, Taylor Jakobson, Taylor Thornton, Ted Chang, ThisIsPIRI, Thomas Deegan, Thomas Hagebols, tianyapiaozi, Tim Zaman, tomguluson92, Tongxuan Liu, Trent Lo, TungJerry, Tyorden, v1incent, Vagif, vcarpani, Vijay Ravichandran, Vikram Tiwari, Viktor Gal, Vincent, Vishnuvardhan Janapati, Vishwak Srinivasan, VitorAlves, wangsiyu, wateryzephyr, WeberXie, WeijieSun, WenHeng (Jack) Chung, wenxizhu, Will Battel, William D. 
Irons, wyzhao, Xiaoming (Jason) Cui, Xiaoquan Kong, Xin, Yasuhiro Matsumoto, ymodak, Yong Tang, Younes Khoudli, Yuan (Terry) Tang, Yuan Lin, YvesNoel Weweler, Zantares, zhuoryin, zjjott, 卜居, 王振华 (Zhenhua Wang), 黄鑫
Release 2.0.0-beta1
TensorFlow 2.0.0-beta1 is a minor update to 2.0.0-beta0 with a few important bug
fixes. Please refer to the 2.0.0-beta0 release notes for a complete list of changes in 2.0.0-beta0.
Bug Fixes and Other Changes
- Partially fix the function inlining and performance regression for LSTM/GRU.
- Replace the training tensor argument with a Python boolean. Required for TFLite, which does not yet support control flow ops.
- Allow SavedModel serialization to accept `None` InputSpec values.
Release 1.14.0
Major Features and Improvements
 This is the first 1.x release containing the compat.v2 module. This module is required to allow libraries to publish code which works in both 1.x and 2.x. After this release, no backwards incompatible changes are allowed in the 2.0 Python API.
 Turn on MKLDNN contraction kernels by default. MKLDNN dynamically dispatches the best kernel implementation based on CPU vector architecture. To disable them, build with define=tensorflow_mkldnn_contraction_kernel=0.
 NonWindows system libraries are now versioned. This should be a noop for most users as it affects only system package maintainers or those building extensions to TensorFlow:
 Python wheels (Pip packages) contain one library file.
 Linux:
libtensorflow_framework.so.1
 MacOS:
libtensorflow_framework.1.dylib
 Linux:
 Our
libtensorflow
tarball archives contain thelibtensorflow
library and two symlinks. MacOS.dylib
libraries are the same, but match MacOS library naming requirements (i.e.libtensorflow.1.dylib
):libtensorflow.so.1.14.0
, the main librarylibtensorflow.so.1
, symlinked to the main librarylibtensorflow.so
, symlinked to.so.1
Behavioral changes
 Set default loss reduction as AUTO for improving reliability of loss scaling with distribution strategy and custom training loops. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used in distribution strategy scope, outside of built-in training loops such as tf.keras compile and fit, we expect the reduction value to be 'None' or 'SUM'. Using other values will raise an error.
 Wrap losses passed to the compile API (strings and v1 losses) which are not instances of the v2 Loss class in a LossWrapper class. All losses will now use SUM_OVER_BATCH_SIZE reduction as default.
 Disable run_eagerly and distribution strategy if there are symbolic tensors added to the model using add_metric or add_loss.
 tf.linspace(start, stop, num) now always uses "stop" as the last value (for num > 1).
 The behavior of tf.gather is now correct when axis=None and batch_dims<0.
 Only create a GCS directory object if the object does not already exist.
 In the map_vectorization optimization, reduce the degree of parallelism in the vectorized map node.
 Bug fix: loss and gradients should now more reliably be correctly scaled w.r.t. the global batch size when using a tf.distribute.Strategy.
 Update cosine similarity loss: removed the negation sign from cosine similarity.
 DType is no longer convertible to an int. Use dtype.as_datatype_enum instead of int(dtype) to get the same result.
 Changed default for gradient accumulation for TPU embeddings to true.
 Callbacks now log values in eager mode when a deferred build model is used.
 Transitive dependencies on :pooling_ops were removed. Some users may need to add explicit dependencies on :pooling_ops if they reference the operators from that library.
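As context for the AUTO default above, the two main reduction modes can be sketched in plain Python (an illustration of the semantics, not TensorFlow's implementation):

```python
# Plain-Python sketch of the reduction modes discussed above.
# SUM adds per-example losses; SUM_OVER_BATCH_SIZE divides that sum by
# the batch size, which is why it scales correctly when a global batch
# is split across replicas under tf.distribute.Strategy.

def reduce_loss(per_example_losses, reduction="SUM_OVER_BATCH_SIZE"):
    total = sum(per_example_losses)
    if reduction == "SUM":
        return total
    if reduction == "SUM_OVER_BATCH_SIZE":
        return total / len(per_example_losses)
    raise ValueError(f"unsupported reduction: {reduction}")

losses = [0.5, 1.5, 1.0, 1.0]
print(reduce_loss(losses, "SUM"))  # 4.0
print(reduce_loss(losses))         # 1.0
```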
Bug Fixes and Other Changes
 Documentation
 Deprecations and Symbol renames.
 Remove unused StringViewVariantWrapper
 Delete unused Fingerprint64Map op registration
 SignatureDef util functions have been deprecated.
 Renamed tf.image functions to remove duplicate "image" where it is redundant.
 tf.keras.experimental.export renamed to tf.keras.experimental.export_saved_model
 Standardize the LayerNormalization API by replacing the args norm_axis and params_axis with axis.
 Tensor::UnsafeCopyFromInternal deprecated in favor of Tensor::BitcastFrom
 Keras & Python API
 Add v2 module aliases for:
 tf.initializers => tf.keras.initializers
 tf.losses => tf.keras.losses & tf.metrics => tf.keras.metrics
 tf.optimizers => tf.keras.optimizers
 Add tf.keras.layers.AbstractRNNCell as the preferred implementation of RNN cell for TF v2. Users can use it to implement RNN cells with custom behavior.
 Add a clear_losses API to be able to clear losses at the end of the forward pass in a custom training loop in eager.
 Add support for passing a list of lists to the metrics param in Keras compile.
 Added top-k to precision and recall in Keras metrics.
 Add public APIs for the cumsum and cumprod Keras backend functions.
 Fix: model.add_loss(symbolic_tensor) should work in ambient eager.
 Add name argument to tf.string_split and tf.strings_split
 Minor change to SavedModels exported from Keras using tf.keras.experimental.export. (SignatureDef key for evaluation mode is now "eval" instead of "test"). This will be reverted back to "test" in the near future.
 Updates binary cross entropy logic in Keras when input is probabilities. Instead of converting probabilities to logits, we are using the cross entropy formula for probabilities.
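The probability form referred to above is plain binary cross entropy applied directly to probabilities; a minimal plain-Python sketch (the clipping epsilon is illustrative, not Keras's exact constant):

```python
import math

def binary_crossentropy(y_true, p, eps=1e-7):
    # Clip probabilities away from 0/1 so the logs stay finite.
    p = min(max(p, eps), 1.0 - eps)
    # Cross entropy on probabilities, with no conversion to logits.
    return -(y_true * math.log(p) + (1.0 - y_true) * math.log(1.0 - p))

print(round(binary_crossentropy(1.0, 0.9), 5))  # 0.10536
```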
 Raw TensorFlow functions can now be used in conjunction with the Keras Functional API during model creation. This obviates the need for users to create Lambda layers in most cases when using the Functional API. Like Lambda layers, TensorFlow functions that result in Variable creation or assign ops are not supported.
 Keras training and validation curves are shown on the same plot.
 Introduce a dynamic constructor argument in Layer and Model, which should be set to True when using imperative control flow in the call method.
 Remove dtype in the constructor of initializers and partition_info in call.
 New ops and improved op functionality
 Add OpKernels for some stateless maps
 Add v2 APIs for AUCCurve and AUCSummationMethod enums. #tfmetricsconvergence
 Add tf.math.nextafter op.
 Add CompositeTensor base class.
 Add tf.linalg.tridiagonal_solve op.
 Add opkernel templates for common table operations.
 Added support for TFLite in TensorFlow 2.0.
 Adds summary trace API for collecting graph and profile information.
 Add batch_dims argument to tf.gather.
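For intuition, with batch_dims=1 the gather picks per-row indices out of per-row params; a plain-Python sketch of the semantics (not the TensorFlow kernel):

```python
def gather_batch_dims1(params, indices):
    # out[b][k] = params[b][indices[b][k]] for each batch row b.
    return [[row[i] for i in idx] for row, idx in zip(params, indices)]

params = [[10, 20, 30],
          [40, 50, 60]]
indices = [[2, 0],
           [1, 1]]
print(gather_batch_dims1(params, indices))  # [[30, 10], [50, 50]]
```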
 Add support for add_metric in the graph function mode.
 Add C++ Gradient for BatchMatMulV2.
 Added tf.random.binomial
 Added gradient for SparseToDense op.
 Add legacy string flat hash map op kernels
 Add a ragged size op and register it to the op dispatcher
 Add broadcasting support to tf.matmul.
 Add ellipsis (...) support for tf.einsum()
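Broadcasting in tf.matmul applies NumPy-style rules to the leading (batch) dimensions while the last two axes act as matrix dimensions; a shape-only plain-Python sketch (illustrative, not the actual implementation):

```python
def matmul_result_shape(a_shape, b_shape):
    # Trailing two axes are the matrix dims; inner dims must agree.
    *a_batch, m, k1 = a_shape
    *b_batch, k2, n = b_shape
    assert k1 == k2, "inner dimensions must match"
    # Broadcast the batch dims right-to-left, NumPy style.
    batch = []
    for i in range(1, max(len(a_batch), len(b_batch)) + 1):
        da = a_batch[-i] if i <= len(a_batch) else 1
        db = b_batch[-i] if i <= len(b_batch) else 1
        assert da == db or da == 1 or db == 1, "batch dims not broadcastable"
        batch.append(max(da, db))
    return tuple(reversed(batch)) + (m, n)

print(matmul_result_shape((2, 1, 3, 4), (5, 4, 6)))  # (2, 5, 3, 6)
```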
 Added LinearOperator.adjoint and LinearOperator.H (alias).
 Added GPU implementation of tf.linalg.tridiagonal_solve.
 Added strings.byte_split
 Add RaggedTensor.placeholder()
 Add a new "result_type" parameter to tf.strings.split.
 add_update can now be passed a zero-arg callable in order to support turning off the update when setting trainable=False on a Layer of a Model compiled with run_eagerly=True.
 Add variant wrapper for absl::string_view.
 Add expand_composites argument to all nest.* methods.
 Add pfor converter for Squeeze.
 Bug fix for tf.tile gradient
 Expose CriticalSection in core as tf.CriticalSection.
 Update Fingerprint64Map to use aliases
 ResourceVariable support for gather_nd.
 ResourceVariable's gather op supports batch dimensions.
 Variadic reduce is supported on CPU
 Extend tf.function with basic support for CompositeTensors arguments (such as SparseTensor and RaggedTensor).
 Add templates and interfaces for creating lookup tables
 Post-training quantization tool supports quantizing weights shared by multiple operations. The models made with versions of this tool will use INT8 types for weights and will only be executable by interpreters from this version onwards.
 Malformed GIF images could result in an out-of-bounds access in the color palette of the frame. This has now been fixed.
 image.resize now considers proper pixel centers and has new kernels (incl. antialiasing).
 Performance
 Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically dispatches the best kernel implementation based on CPU vector architecture. To disable them, build with --define=tensorflow_mkldnn_contraction_kernel=0.
 Support for multi-host ncclAllReduce in Distribution Strategy.
 Expose a flag that allows the number of threads to vary across Python benchmarks.
 TensorFlow 2.0 Development
 Add v2 sparse categorical crossentropy metric.
 Allow nonTensors through v2 losses.
 Add UnifiedGRU as the new GRU implementation for TF 2.0. Change the default recurrent activation function for GRU from 'hard_sigmoid' to 'sigmoid', and 'reset_after' to True in 2.0. Historically the recurrent activation is 'hard_sigmoid' since it is faster than 'sigmoid'. With the new unified backend between CPU and GPU modes, since the CuDNN kernel uses sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default GRU will be compatible with both CPU and GPU kernels. This enables users with GPUs to use the CuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. If users want to use their 1.x pre-trained checkpoints, please construct the layer with GRU(recurrent_activation='hard_sigmoid', reset_after=False) to fall back to 1.x behavior.
 TF 2.0: Update metric names to always reflect what the user has given in compile. Affects the following cases: 1. when the name is given as 'accuracy'/'crossentropy'; 2. when an aliased function name is used, e.g. 'mse'; 3. removing the weighted prefix from weighted metric names.
 Begin adding Go wrapper for C Eager API
 image.resize in 2.0 now supports gradients for the new resize kernels.
 removed tf.string_split from v2 API
 Expose tf.contrib.proto.* ops in tf.io (they will exist in TF2)
 Updates the TFLiteConverter API in 2.0. Changes from_concrete_function to from_concrete_functions.
 Enable tf.distribute.experimental.MultiWorkerMirroredStrategy working in eager mode.
 Support both binary and +1/-1 label input in v2 hinge and squared hinge losses.
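Supporting both conventions amounts to mapping binary {0, 1} labels onto the {-1, +1} form that hinge loss expects; a plain-Python sketch (illustrative, not the Keras implementation):

```python
def to_signed(labels):
    # Map binary {0, 1} labels to {-1, +1}; labels already in
    # {-1, +1} pass through unchanged.
    return [2 * y - 1 if y in (0, 1) else y for y in labels]

def hinge(y_true, y_pred):
    signed = to_signed(y_true)
    return sum(max(0.0, 1.0 - t * p) for t, p in zip(signed, y_pred)) / len(y_pred)

# Both label conventions give the same loss (about 0.5 here).
print(hinge([0, 1], [-0.7, 0.3]))
print(hinge([-1, 1], [-0.7, 0.3]))
```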
 TensorFlow Lite
 Adds support for tflite_convert in 2.0.
 Remove lite.OpHint, lite.experimental, and lite.constant from 2.0 API.
 tf.contrib
 Added Neural Turing Machine implementation as described in https://arxiv.org/abs/1807.08518.
 Remove tf.contrib.timeseries dependency on TF distributions.
 tf.data
 Add num_parallel_reads and support for passing in a Dataset containing filenames into TextLineDataset and FixedLengthRecordDataset.
 Going forward we operate in TF 2.0; this change is part of the effort to slowly convert XYZDataset to the DatasetV2 type, which is the official version to be used in TF 2.0. It was motivated by a compatibility issue found when moving contrib.bigtable to tensorflow_io: _BigtableXYZDataset (of type DatasetV2) does not implement the _as_variant_tensor() of DatasetV1. Converting to DatasetV2 removes the overhead of maintaining V1 while we move into TF 2.0.
 Add dataset ops to the graph (or create kernels in Eager execution) during the Python Dataset object creation instead of doing it during Iterator creation time.
 Add support for TensorArrays to tf.data Dataset.
 Switching tf.data functions to use defun, providing an escape hatch to continue using the legacy Defun.
 Toolchains
 CUDNN_INSTALL_PATH, TENSORRT_INSTALL_PATH, NCCL_INSTALL_PATH, NCCL_HDR_PATH are deprecated. Use TF_CUDA_PATHS instead, which supports a comma-separated list of base paths that are searched to find CUDA libraries and headers.
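A hypothetical configuration could look like the following (the paths are examples only, not defaults):

```shell
# Comma-separated base paths; the build searches each one for
# CUDA/cuDNN/TensorRT/NCCL libraries and headers.
export TF_CUDA_PATHS="/usr/local/cuda-10.0,/usr/local/tensorrt"

# Show each base path that will be searched.
IFS=',' read -ra tf_paths <<< "$TF_CUDA_PATHS"
for p in "${tf_paths[@]}"; do
  echo "$p"
done
```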
 TF code now resides in tensorflow_core and tensorflow is just a virtual pip package. No code changes are needed for projects using TensorFlow; the change is transparent.
 XLA
 XLA HLO graphs can be inspected with interactive_graphviz tool now.
 Estimator
 Use tf.compat.v1.estimator.inputs instead of tf.estimator.inputs.
 Replace contrib references with tf.estimator.experimental.* for APIs in early_stopping.py.
 Determining the “correct” value of the iterations_per_loop for TPUEstimator or DistributionStrategy continues to be a challenge for our users. We propose dynamically tuning the iterations_per_loop variable, specifically for using TPUEstimator in training mode, based on a user target TPU execution time. Users might specify a value such as iterations_per_loop=300s, which will result in roughly 300 seconds being spent on the TPU between host side operations.
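As a rough sketch of how such a time-based value could be distinguished from the classic iteration count (hypothetical parsing logic, not the actual TPUEstimator code):

```python
def parse_iterations_per_loop(value):
    """Return (number, unit) where unit is 'count' or 'seconds'."""
    s = str(value).strip()
    if s.endswith("s"):
        # An 's' suffix marks a target TPU execution time in seconds,
        # e.g. "300s" -> roughly 300 seconds per loop on the TPU.
        return float(s[:-1]), "seconds"
    # A bare integer keeps the classic meaning: iterations per loop.
    return int(s), "count"

print(parse_iterations_per_loop("300s"))  # (300.0, 'seconds')
print(parse_iterations_per_loop(1000))    # (1000, 'count')
```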
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
1e100, 4d55397500, a6802739, abenmao, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy, Alex, Aman Patel, Amit, Amit Kumar Jaiswal, Amit Srivastava, Andreas Eberle, Andy Craze, Anthony Platanios, Armen Poghosov, armenpoghosov, arp95, Arpit Shah, Ashwin Ramaswami, Aurelien Geron, AuréLien Geron, aweers, awesomealex1, Ayush Agrawal, Ben Barsdell, Bharat Raghunathan, Bhavani Subramanian, blairhan, BléNesi Attila, Brandon Carter, candy.dc, Chao Liu, chenchc, chie8842, Christian Hansen, Christian Sigg, Clayne Robison, crafet, csukuangfj, ctiijima, Dan Jarvis, Dan Lazewatsky, Daniel Ingram, Daniel Salvadori, Dave Airlie, David Norman, Dayananda V, DayanandaV, delock, Denis Khalikov, Deven Desai, Dheeraj Rajaram Reddy, dmitrievanthony, Donovan Ong, Drew Szurko, Duncan Riach, Dustin Neighly, Edward Forgacs, EFanZh, Fei Hu, Felix Lemke, Filip Matzner, fo40225, frreiss, Gautam, gehring, Geoffrey Irving, Grzegorz George Pawelczak, Grzegorz Pawelczak, GyoungYoon Ryoo, HanGuo97, Hanton Yang, Hari Shankar, hehongliang, Heungsub Lee, Hoeseong Kim, IHong Jhuo, Ilango R, Innovimax, Irene Dea, Jacky Ko, Jakub Lipinski, Jason Zaman, jcf94, Jeffrey Poznanovic, Jens Elofsson, Jeroen BéDorf, Jia Qingtong, Jiankang, Joe Q, Joe Quadrino, Joeran Beel, Jonas Rauber, Jonathan, Jonathan Kyl, Joppe Geluykens, Joseph Friedman, jtressle, jwu, K Yasaswi Sri Chandra Gandhi, K. 
Hodges, Kaixi Hou, Karl Lessard, Karl Weinmeister, Karthik Muthuraman, Kashif Rasul, KDR, Keno Fischer, Kevin Mader, kjopek, KoanSin Tan, kouml, ktaebum, Lakshay Tokas, Laurent Le Brun, Letian Kang, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Geiger, Luke Han, luxupu, Ma, Guokai, Mahmoud Abuzaina, Mandar Deshpande, manhyuk, Marco Gaido, Marek Drozdowski, Mark Collier, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley, MattConley, mbhuiyan, mdfaijul, Melissa Grueter, Michael KäUfl, MickaëL Schoentgen, Miguel Morin, Mihail Salnikov, Mike Arpaia, Mike Holcomb, monklof, Moses Marin, MshrH, nammbash, Natalia Gimelshein, NayanaIbm, neargye, Neeraj Pradhan, Nehal J Wani, Nick, Niels Ole Salscheider, Niranjan Hasabnis, nlewycky, Nuka137, Nutti, olicht, P Sudeepam, Palmer Lao, Pan Daoxin, Pariksheet Pinjari, Pavel Samolysov, PENGWA, Pooya Davoodi, R S Nikhil Krishna, Rohit Gupta, Roman Soldatow, rthadur, Ruizhe, Ryan Jiang, Samantha Andow, Sami Kama, SanaDamani, Saurabh Deoras, sdamani, seanshpark, Sebastien Iooss, ServInc, Shahzad Lone, Shashank Gupta, Shashi, shashvat, shashvatshahi1998, Siju, Siju Samuel, SneaseAbq, Spencer Schaber, sremedios, srinivasan.narayanamoorthy, Steve Lang, Steve Nesae, Sumesh Udayakumaran, Supriya Rao, Taylor Jakobson, Taylor Thornton, Ted Chang, ThisIsPIRI, Thomas Deegan, Thomas Hagebols, tianyapiaozi, Tim Zaman, tomguluson92, Tongxuan Liu, TungJerry, v1incent, Vagif, vcarpani, Vikram Tiwari, Vishwak Srinivasan, VitorAlves, wangsiyu, wateryzephyr, WeberXie, WeijieSun, WenHeng (Jack) Chung, wenxizhu, Will Battel, William D. Irons, wyzhao, Xin, Xinan Jiang, Yasuhiro Matsumoto, ymodak, Yong Tang, Younes Khoudli, Yuan Lin, YvesNoel Weweler, Zantares, zjjott, 卜居, 王振华 (Wang Zhenhua), 黄鑫
Release 1.12.2
Bug Fixes and Other Changes
 Fixes a potential security vulnerability where carefully crafted GIF images can produce a null pointer dereference during decoding
Release 2.0.0-alpha0
Major Features and Improvements
TensorFlow 2.0 focuses on simplicity and ease of use, featuring updates like:
 Easy model building with Keras and eager execution.
 Robust model deployment in production on any platform.
 Powerful experimentation for research.
 API simplification by reducing duplication and removing deprecated endpoints.
For information on upgrading your existing TensorFlow 1.x models, please refer to our Upgrade and Migration guides.
We have also released a collection of tutorials and getting started guides, and an Effective Style Guide for TF 2.0.
For more information on these community-driven changes, be sure to check out the RFCs we have on GitHub. If you care about details, all of the RFCs are important.
Refer to our public project status tracker and issues tagged with 2.0
on GitHub for insight into recent issues and development progress.
And, of course: we would love to have your feedback! If you experience any snags when using TF 2.0, be sure to let us know at the TF 2.0 Testing User Group. We have a support mailing list as well as weekly testing meetings, and would love to hear your migration feedback and questions.
Some highlights:
 API cleanup, including removing tf.app, tf.flags, and tf.logging in favor of absl-py.
 No more global variables with helper methods like tf.global_variables_initializer and tf.get_global_step.
 Functions, not sessions (tf.Session and session.run -> tf.function).
 Added support for TensorFlow Lite in TensorFlow 2.0.
Breaking Changes
tf.contrib has been deprecated, and functionality has been either migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely. Checkpoint breakage for RNNs and for Optimizers.
Bug Fixes and Other Changes
tf.estimator
 Use tf.compat.v1.estimator.inputs instead of tf.estimator.inputs in Estimator.
 Replace contrib references with tf.estimator.experimental.* for APIs in early_stopping.py in Estimator.
 Keras & Python API
 Added top-k to precision and recall in Keras metrics.
 Add public APIs for the cumsum and cumprod Keras backend functions.
 Minor change to SavedModels exported from Keras using tf.keras.experimental.export. (SignatureDef key for evaluation mode is now "eval" instead of "test"). This will be reverted back to "test" in the near future.
 Add v2 module aliases for losses and metrics: tf.losses = tf.keras.losses & tf.metrics = tf.keras.metrics
 Add v2 module aliases for optimizers: tf.optimizers = tf.keras.optimizers
 tf.keras.experimental.export renamed to tf.keras.experimental.export_saved_model
 Add v2 module aliases for initializers: tf.initializers = tf.keras.initializers
 Add tf.keras.layers.AbstractRNNCell as the preferred implementation of RNN cell for TF v2. Users can use it to implement RNN cells with custom behavior.
 Keras training and validation curves are shown on the same plot.
 Disable run_eagerly and distribution strategy if there are symbolic tensors added to the model using add_metric or add_loss.
 Other:
 Only create a GCS directory object if the object does not already exist.
 Introduce a dynamic constructor argument in Layer and Model, which should be set to True when using imperative control flow in the call method.
 ResourceVariable and Variable no longer accept constraint in the constructor, nor expose it as a property.
 Add UnifiedGRU as the new GRU implementation for TF 2.0. Change the default recurrent activation function for GRU from 'hard_sigmoid' to 'sigmoid', and 'reset_after' to True in 2.0. Historically the recurrent activation is 'hard_sigmoid' since it is faster than 'sigmoid'. With the new unified backend between CPU and GPU modes, since the CuDNN kernel uses sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default GRU will be compatible with both CPU and GPU kernels. This enables users with GPUs to use the CuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. If users want to use their 1.x pre-trained checkpoints, please construct the layer with GRU(recurrent_activation='hard_sigmoid', reset_after=False) to fall back to 1.x behavior.
 Begin adding Go wrapper for C Eager API
 XLA HLO graphs can be inspected with interactive_graphviz tool now.
- Add dataset ops to the graph (or create kernels in eager execution) during Python Dataset object creation instead of during Iterator creation.
 Add batch_dims argument to tf.gather.
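With `batch_dims`, leading dimensions are treated as batch dimensions and gathering happens within each batch element. A minimal pure-Python sketch of the `batch_dims=1` case on nested lists (illustrative only, not TensorFlow's implementation):

```python
def gather_batch_dims_1(params, indices):
    """Sketch of gather with batch_dims=1 on nested lists.

    For each batch element, pick that element's entries by index,
    instead of indexing into the outermost axis.
    """
    return [[row[i] for i in idx] for row, idx in zip(params, indices)]

params = [[10, 20, 30],
          [40, 50, 60]]
indices = [[2, 0],
           [1, 1]]
print(gather_batch_dims_1(params, indices))  # [[30, 10], [50, 50]]
```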
- Remove `dtype` in the constructor of initializers and `partition_info` in `call`.
- Add `tf.math.nextafter` op.
- Turn on MKL-DNN contraction kernels by default. MKL-DNN dynamically dispatches the best kernel implementation based on CPU vector architecture. To disable them, build with `--define=tensorflow_mkldnn_contraction_kernel=0`.
- `tf.linspace(start, stop, num)` now always uses "stop" as the last value (for num > 1).
- Raw TensorFlow functions can now be used in conjunction with the Keras Functional API during model creation. This obviates the need for users to create Lambda layers in most cases when using the Functional API. Like Lambda layers, TensorFlow functions that result in Variable creation or assign ops are not supported.
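The endpoint-inclusive behavior can be illustrated with a small pure-Python sketch (illustrative only, not TensorFlow's kernel):

```python
def linspace(start, stop, num):
    """Endpoint-inclusive linspace: for num > 1 the last value is exactly stop."""
    if num == 1:
        return [float(start)]
    step = (stop - start) / (num - 1)  # num - 1 intervals, so stop is included
    return [start + i * step for i in range(num)]

print(linspace(0.0, 10.0, 5))  # [0.0, 2.5, 5.0, 7.5, 10.0]
```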
 Add a ragged size op and register it to the op dispatcher
 Transitive dependencies on :pooling_ops were removed. Some users may need to add explicit dependencies on :pooling_ops if they reference the operators from that library.
 Updates binary cross entropy logic in Keras when input is probabilities. Instead of converting probabilities to logits, we are using the cross entropy formula for probabilities.
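The change is numerical rather than mathematical: instead of mapping probabilities back to logits and using the logit-based formula, the probability form of the cross entropy is applied directly. A minimal sketch of that formula (illustrative; the clipping epsilon is a hypothetical value, not taken from the release):

```python
import math

def binary_crossentropy_from_probs(y_true, y_prob, eps=1e-7):
    """Mean binary cross entropy computed directly from probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(y_true)

print(binary_crossentropy_from_probs([1.0], [0.8]))  # -log(0.8), about 0.2231
```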
 Add CompositeTensor base class.
- Malformed GIF images could result in an out-of-bounds access in the color palette of the frame. This has now been fixed.
 Add templates and interfaces for creating lookup tables
- `Tensor::UnsafeCopyFromInternal` deprecated in favor of `Tensor::BitcastFrom`.
- In the `map_vectorization` optimization, reduce the degree of parallelism in the vectorized map node.
- Add variant wrapper for `absl::string_view`.
- The post-training quantization tool supports quantizing weights shared by multiple operations. Models made with versions of this tool will use INT8 types for weights and will only be executable by interpreters from this version onwards.
- Wraps losses passed to the `compile` API (strings and v1 losses) which are not instances of the v2 `Loss` class in a `LossWrapper` class. All losses will now use `SUM_OVER_BATCH_SIZE` reduction as default.
- Add OpKernels for some stateless maps.
- Add v2 APIs for AUCCurve and AUCSummationMethod enums.
- Allow non-Tensors through v2 losses.
- Add v2 sparse categorical crossentropy metric.
- `DType` is no longer convertible to an int. Use `dtype.as_datatype_enum` instead of `int(dtype)` to get the same result.
- Support both binary and -1/1 label input in v2 hinge and squared hinge losses.
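Accepting both label conventions amounts to detecting binary 0/1 labels and mapping them to -1/1 before applying the usual hinge formula. A rough sketch of that idea (not the Keras implementation):

```python
def hinge_loss(y_true, y_pred):
    """Mean hinge loss; 0/1 labels are converted to the -1/1 convention."""
    if all(t in (0, 1) for t in y_true):
        y_true = [2 * t - 1 for t in y_true]  # map {0, 1} -> {-1, 1}
    return sum(max(0.0, 1.0 - t * p) for t, p in zip(y_true, y_pred)) / len(y_true)

# The same data in either label convention gives the same loss.
print(hinge_loss([1, 0], [0.5, -0.5]))   # 0.5
print(hinge_loss([1, -1], [0.5, -0.5]))  # 0.5
```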
 Bug fix: loss and gradients should now more reliably be correctly scaled w.r.t. the global batch size when using a tf.distribute.Strategy.
 Added LinearOperator.adjoint and LinearOperator.H (alias).
- Switch `tf.data` functions to use `defun`, providing an escape hatch to continue using the legacy `Defun`.
- Expose CriticalSection in core as `tf.CriticalSection`.
- Enhanced graphviz output.
- The behavior of `tf.gather` is now correct when `axis=None` and `batch_dims<0`.
- Add `tf.linalg.tridiagonal_solve` op.
- Add op kernel templates for common table operations.
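A tridiagonal system can be solved in O(n) with the Thomas algorithm; the pure-Python sketch below illustrates the math such an op implements (no pivoting, so it assumes a well-conditioned system — this is not TensorFlow's kernel):

```python
def tridiagonal_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal linear system.

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    """
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                  # forward elimination sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):         # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [[2,1,0],[1,2,1],[0,1,2]] @ [1,2,3] == [4,8,8]
print(tridiagonal_solve([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```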
- Fix: callbacks now log values in eager mode when a deferred-build model is used.
 SignatureDef util functions have been deprecated.
 Update Fingerprint64Map to use aliases
 Add legacy string flat hash map op kernels
- Add support for passing a list of lists to the `metrics` param in Keras `compile`.
- Fix: `model.add_loss(symbolic_tensor)` should work in ambient eager.
- Add `clear_losses` API to be able to clear losses at the end of the forward pass in a custom training loop in eager.
- Add support for `add_metric` in the graph function mode.
- Updated cosine similarity loss: removed the negate sign from cosine similarity.
- TF 2.0: Update metric names to always reflect what the user has given in compile. Affects the following cases: 1. when the name is given as 'accuracy'/'crossentropy'; 2. when an aliased function name is used, e.g. 'mse'; 3. removing the `weighted` prefix from weighted metric names.
- Workaround for compiler bug.
- Changed default for gradient accumulation for TPU embeddings to true.
- Adds summary trace API for collecting graph and profile information.
- Support for multi-host `ncclAllReduce` in Distribution Strategy.
- Enable `tf.distribute.experimental.MultiWorkerMirroredStrategy` to work in eager mode.
- `image.resize` now considers proper pixel centers and has new kernels (including anti-aliasing).
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
1e100, a6802739, Abolfazl Shahbazi, Adam Weiss, Ag Ramesh, Alan Du, Albin Joy, Amit, Amit Srivastava, Andy Craze, Anshuman Tripathy, Armen Poghosov, armenpoghosov, Arpit Shah, Ashwin Ramaswami, Aurelien Geron, AuréLien Geron, aweers, awesomealex1, Bairen Yi, Ben Barsdell, Bhavani Subramanian, Brandon Carter, candy.dc, Chao Liu, Clayne Robison, csukuangfj, Dan Jarvis, Dan Lazewatsky, Daniel Ingram, Dave Airlie, David Norman, Dayananda V, Denis Khalikov, Deven Desai, Dheeraj Rajaram Reddy, dmitrievanthony, Drew Szurko, Duncan Riach, Fei Hu, Felix Lemke, Filip Matzner, fo40225, frreiss, Gautam, gehring, Grzegorz George Pawelczak, Grzegorz Pawelczak, HanGuo97, Hari Shankar, hehongliang, Heungsub Lee, Hoeseong Kim, IHong Jhuo, Ilango R, Innovimax, Jacky Ko, Jakub Lipinski, Jason Zaman, jcf94, Jeff Poznanovic, Jia Qingtong, Jiankang, Joe Q, Joe Quadrino, Jonas Rauber, Jonathan Kyl, Joppe Geluykens, Joseph Friedman, jtressle, jwu, K. Hodges, kaixih, Karl Lessard, Karl Weinmeister, Kashif Rasul, kjopek, KoanSin Tan, kouml, ktaebum, Laurent Le Brun, Li, Guizi, Loo Rong Jie, Lucas Hendren, Lukas Geiger, Luke Han, Mahmoud Abuzaina, manhyuk, Marco Gaido, Marek Drozdowski, Mark Ryan, mars20, Mateusz Chudyk, Matt Conley, MattConley, mbhuiyan, mdfaijul, Melissa Grueter, Michael KäUfl, MickaëL Schoentgen, Miguel Morin, Mike Arpaia, nammbash, Natalia Gimelshein, NayanaIbm, neargye, Nehal J Wani, Niels Ole Salscheider, Niranjan Hasabnis, Nutti, olicht, P Sudeepam, Paige Bailey, Palmer Lao, Pariksheet Pinjari, Pavel Samolysov, Pooya Davoodi, Ryan Jiang, Samantha Andow, Sami Kama, Saurabh Deoras, Shahzad Lone, Shashi, Siju, Siju Samuel, SneaseAbq, Spencer Schaber, srinivasan.narayanamoorthy, Steve Lang, Steve Nesae, Supriya Rao, Taylor Jakobson, Taylor Thornton, ThisIsPIRI, Thomas Deegan, tomguluson92, Tongxuan Liu, Vagif, vcarpani, Vikram Tiwari, Vishwak Srinivasan, VitorAlves, wangsiyu, WeberXie, WeijieSun, WenHeng (Jack) Chung, wenxizhu, William D. 
Irons, Yan Facai (颜发才), ymodak, Yong Tang, Younes Khoudli, Yuan Lin, YvesNoel Weweler, zjjott, 卜居, 王振华 (Wang Zhenhua)
Release 1.13.1
Major Features and Improvements
- TensorFlow Lite has moved from contrib to core. This means that Python modules are under `tf.lite` and source code is now under `tensorflow/lite` rather than `tensorflow/contrib/lite`.
- TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
- Support for Python 3.7 on all operating systems.
- Moved NCCL to core.
Behavioral changes
- Disallow conversion of Python floating types to uint32/64 (matching the behavior of other integer types) in `tf.constant`.
- Make the `gain` argument of convolutional orthogonal initializers (`convolutional_delta_orthogonal`, `convolutional_orthogonal_1D`, `convolutional_orthogonal_2D`, `convolutional_orthogonal_3D`) have consistent behavior with the `tf.initializers.orthogonal` initializer, i.e. scale the output l2-norm by `gain` and NOT by `sqrt(gain)`. (Note that these functions are currently in `tf.contrib`, which is not guaranteed backward compatible.)
Bug Fixes and Other Changes
 Documentation
 Update the doc with the details about the rounding mode used in quantize_and_dequantize_v2.
 Clarify that tensorflow::port::InitMain() should be called before using the TensorFlow library. Programs failing to do this are not portable to all platforms.
 Deprecations and Symbol renames.
- Removing deprecations for the following endpoints: `tf.acos`, `tf.acosh`, `tf.add`, `tf.as_string`, `tf.asin`, `tf.asinh`, `tf.atan`, `tf.atan2`, `tf.atanh`, `tf.cos`, `tf.cosh`, `tf.equal`, `tf.exp`, `tf.floor`, `tf.greater`, `tf.greater_equal`, `tf.less`, `tf.less_equal`, `tf.log`, `tf.log1p`, `tf.logical_and`, `tf.logical_not`, `tf.logical_or`, `tf.maximum`, `tf.minimum`, `tf.not_equal`, `tf.sin`, `tf.sinh`, `tf.tan`.
- Deprecate `tf.data.Dataset.shard`.
- Deprecate `saved_model.loader.load`, which is replaced by `saved_model.load`, and `saved_model.main_op`, which will be replaced by `saved_model.main_op` in V2.
- Deprecate `tf.QUANTIZED_DTYPES`. The official new symbol is `tf.dtypes.QUANTIZED_DTYPES`.
- Update sklearn imports for deprecated packages.
- Deprecate `Variable.count_up_to` and `tf.count_up_to` in favor of `Dataset.range`.
- Export the `confusion_matrix` op as `tf.math.confusion_matrix` instead of `tf.train.confusion_matrix`.
- Add a `tf.dtypes.` endpoint for every constant in dtypes.py; move endpoints in versions.py to corresponding endpoints in `tf.sysconfig.` and `tf.version.`; move all constants under `tf.saved_model` submodules to the `tf.saved_model` module. New endpoints are added in V1 and V2, but existing endpoint removals are only applied in V2.
- Deprecates behavior where device assignment overrides collocation constraints inside a collocation context manager.
 Keras & Python API
- Add to Keras functionality analogous to `tf.register_tensor_conversion_function`.
- Subclassed Keras models can now be saved through `tf.contrib.saved_model.save_keras_model`.
- `LinearOperator.matmul` now returns a new `LinearOperator`.
 New ops and improved op functionality
 Add a Nearest Neighbor Resize op.
- Add an `ignore_unknown` argument to `parse_values`, which suppresses ValueError for unknown hyperparameter types. Such hyperparameters are ignored.
- Add `tf.linalg.matvec` convenience function.
- `tf.einsum()` raises `ValueError` for unsupported equations like `"ii->"`.
- Add DCT-I and IDCT-I in `tf.signal.dct` and `tf.signal.idct`.
- Add LU decomposition op.
- Add quantile loss to gradient boosted trees in estimator.
- Add `round_mode` to the `QuantizeAndDequantizeV2` op to select the rounding algorithm.
- Add `unicode_encode`, `unicode_decode`, `unicode_decode_with_offsets`, `unicode_split`, `unicode_split_with_offset`, and `unicode_transcode` ops. Amongst other things, this adds the ability to encode, decode, and transcode a variety of input text encoding formats into the main Unicode encodings (UTF-8, UTF-16-BE, UTF-32-BE).
- Add "unit" attribute to the substr op, which allows obtaining the substring of a string containing unicode characters.
- Broadcasting support for Ragged Tensors.
- `SpaceToDepth` supports the uint8 data type.
- Support multi-label quantile regression in estimator.
- We now use "div" as the default partition_strategy in `tf.nn.safe_embedding_lookup_sparse`, `tf.nn.sampled_softmax` and `tf.nn.nce_loss`.
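Transcoding simply re-encodes the same code points in a different Unicode encoding. A sketch of the idea using Python's built-in codecs (illustrative; the TF ops operate on tensors of strings, not single byte strings):

```python
def transcode(data: bytes, input_encoding: str, output_encoding: str) -> bytes:
    """Re-encode a byte string from one Unicode encoding to another."""
    return data.decode(input_encoding).encode(output_encoding)

utf8 = "héllo".encode("utf-8")
utf16be = transcode(utf8, "utf-8", "utf-16-be")
assert utf16be.decode("utf-16-be") == "héllo"
print(len(utf8), len(utf16be))  # 6 10 (é is 2 bytes in UTF-8; UTF-16-BE is 2 bytes/char here)
```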
 Performance
 Improve performance of GPU cumsum/cumprod by up to 300x.
 Added support for weight decay in most TPU embedding optimizers, including AdamW and MomentumW.
 TensorFlow 2.0 Development
- Add a command line tool to convert code to TF 2.0: `tf_upgrade_v2`.
- Merge `tf.spectral` into `tf.signal` for TensorFlow 2.0.
- Change the default recurrent activation function for LSTM from 'hard_sigmoid' to 'sigmoid' in 2.0. Historically the recurrent activation was 'hard_sigmoid' since it is faster than 'sigmoid'. With the new unified backend between CPU and GPU modes, and since the CuDNN kernel uses sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default LSTM is compatible with both the CPU and GPU kernels. This enables users with GPUs to use the CuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. Users who want to use their 1.x pretrained checkpoints should construct the layer with `LSTM(recurrent_activation='hard_sigmoid')` to fall back to 1.x behavior.
 TensorFlow Lite
- Move from `tensorflow/contrib/lite` to `tensorflow/lite`.
- Add experimental Java API for injecting TensorFlow Lite delegates.
- Add support for strings in the TensorFlow Lite Java API.
- tf.contrib:
  - Add Apache Ignite Filesystem plugin to support accessing Apache IGFS.
  - Dropout now takes a `rate` argument; `keep_prob` is deprecated.
  - Estimator occurrences referencing `tf.contrib.estimator` were changed to `tf.estimator`: `tf.contrib.estimator.BaselineEstimator` with `tf.estimator.BaselineEstimator`; `tf.contrib.estimator.DNNLinearCombinedEstimator` with `tf.estimator.DNNLinearCombinedEstimator`; `tf.contrib.estimator.DNNEstimator` with `tf.estimator.DNNEstimator`; `tf.contrib.estimator.LinearEstimator` with `tf.estimator.LinearEstimator`; `tf.contrib.estimator.InMemoryEvaluatorHook` with `tf.estimator.experimental.InMemoryEvaluatorHook`; `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with `tf.estimator.experimental.make_stop_at_checkpoint_step_hook`.
  - Expose `tf.distribute.Strategy` as the new name for `tf.contrib.distribute.DistributionStrategy`.
  - Migrate linear optimizer from contrib to core.
  - Move `tf.contrib.signal` to `tf.signal` (preserving aliases in tf.contrib.signal).
  - Users of `tf.contrib.estimator.export_all_saved_models` and related should switch to `tf.estimator.Estimator.experimental_export_all_saved_models`.
 tf.data:
- Add `tf.data.experimental.StatsOptions()` to configure options to collect statistics from a `tf.data.Dataset` pipeline using `StatsAggregator`. Add a nested option, `experimental_stats` (which takes a `tf.data.experimental.StatsOptions` object), to `tf.data.Options`. Deprecates `tf.data.experimental.set_stats_aggregator`.
- Performance optimizations:
  - Add `tf.data.experimental.OptimizationOptions()` to configure options to enable `tf.data` performance optimizations. Add a nested option, `experimental_optimization` (which takes a `tf.data.experimental.OptimizationOptions` object), to `tf.data.Options`. Remove performance optimization options from `tf.data.Options`, and add them under `tf.data.experimental.OptimizationOptions` instead.
  - Enable `map_and_batch_fusion` and `noop_elimination` optimizations by default. They can be disabled by configuring `tf.data.experimental.OptimizationOptions` to set `map_and_batch = False` or `noop_elimination = False` respectively. To disable all default optimizations, set `apply_default_optimizations = False`.
  - Support parallel map in `map_and_filter_fusion`.
  - Disable static optimizations for input pipelines that use non-resource `tf.Variable`s.
- Add NUMA-aware MapAndBatch dataset.
- Deprecate `tf.data.Dataset.make_one_shot_iterator()` in V1, remove it from V2, and add `tf.compat.v1.data.make_one_shot_iterator()`.
- Deprecate `tf.data.Dataset.make_initializable_iterator()` in V1, remove it from V2, and add `tf.compat.v1.data.make_initializable_iterator()`.
- Enable nested dataset support in core `tf.data` transformations.
- For `tf.data.Dataset` implementers: Added the `tf.data.Dataset._element_structure` property to replace `Dataset.output_{types,shapes,classes}`.
- Make `num_parallel_calls` of `tf.data.Dataset.interleave` and `tf.data.Dataset.map` work in eager mode.
 Toolchains
- Fixed OpenSSL compatibility by avoiding `EVP_MD_CTX_destroy`.
- Added bounds checking to printing deprecation warnings.
- Upgraded CUDA dependency to 10.0.
- To build with Android NDK r14b, add "#include <linux/compiler.h>" to android-ndk-r14b/platforms/android-14/arch-*/usr/include/linux/futex.h.
- Removed `:android_tensorflow_lib_selective_registration*` targets; use `:android_tensorflow_lib_lite*` targets instead.
 XLA
- Move the `RoundToEven` function to xla/client/lib/math.h.
- A new environment variable, `TF_XLA_DEBUG_OPTIONS_PASSTHROUGH`, set to "1" or "true", allows the debug options passed within an XRTCompile op to be passed directly to the XLA compilation backend. If the variable is not set (service side), only a restricted set will be passed through.
- Allow the XRTCompile op to return the ProgramShape resulting from the XLA compilation as a second return argument.
- XLA HLO graphs can now be rendered as SVG/HTML.
 Estimator
- Replace all occurrences of `tf.contrib.estimator.BaselineEstimator` with `tf.estimator.BaselineEstimator`.
- Replace all occurrences of `tf.contrib.estimator.DNNLinearCombinedEstimator` with `tf.estimator.DNNLinearCombinedEstimator`.
- Replace all occurrences of `tf.contrib.estimator.DNNEstimator` with `tf.estimator.DNNEstimator`.
- Replace all occurrences of `tf.contrib.estimator.LinearEstimator` with `tf.estimator.LinearEstimator`.
- Users of `tf.contrib.estimator.export_all_saved_models` and related should switch to `tf.estimator.Estimator.experimental_export_all_saved_models`.
- Update `regression_head` to the new Head API for Canned Estimator V2.
- Switch `multi_class_head` to the Head API for Canned Estimator V2.
- Replace all occurrences of `tf.contrib.estimator.InMemoryEvaluatorHook` and `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with `tf.estimator.experimental.InMemoryEvaluatorHook` and `tf.estimator.experimental.make_stop_at_checkpoint_step_hook`.
- Migrate linear optimizer from contrib to core.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abhinav Upadhyay, Ag Ramesh, akikaaa, Alexis Louis, Anders Huss, Andreas Madsen, Andrew Banchich, Andy Craze, Anton Dmitriev, Artem Malykh, AvijitNervana, Balint Cristian, Benjamin Tan Wei Hao, Bhavani Subramanian, Brendan Finan, Brian Nemsick, Bryan Cutler, By Shen, Cao Zongyan, Castiel, Chris Antaki, Christian Goll, Cibifang, Clayne Robison, Codrut Grosu, Cong Xu, Dalmo Cirne, Daniel Hunter, Dougal J. Sutherland, Edvard Fagerholm, EFanZh, Erik Smistad, Evgeniy Polyakov, Feiyang Chen, franklin5, Fred Reiss, Gautam, gehring, Geoffrey Irving, George Sterpu, Gitea, Grzegorz George Pawelczak, Guozhong Zhuang, himkt, Hoeseong Kim, Huan Li (李卓桓), HuiyangFei, hyunyoung, Isaac Burbank, jackonan, Jacky Ko, Jason Furmanek, Jason Zaman, Javier Luraschi, Jiang,Zhoulong, joaak, John Lin, Jonathan Wyatt Hoech, josephyearsley, Josh Gordon, Julian Niedermeier, Karl Lessard, Keno Fischer, lanhin, Leon Graser, leondgarse, Li, Guizi, Li, Yiqiang, lxl910915, Mahmoud Abuzaina, manhyuk, Marcela Morales Quispe, margaretmz, Matt Conley, Max Pumperla, mbhuiyan, mdfaijul, Meng, Peng, Michael, Michael Gielda, mrTsjolder, Muhammad Wildan, neargye, Nehal J Wani, NEWPLAN, Niranjan Hasabnis, Nutti, olicht, Pan Daoxin, Pedro Monreal, Peng Yu, pillarpond, Pooya Davoodi, qiezi, Rholais Lii, Richard Yu, Rin Arakaki, Roger Iyengar, sahilbadyal, Sami Kama, Sandip Giri, Scott Leishman, Serge Panev, Seunghoon Park, Shafi Dayatar, shengfuintel, Shimin Guo, Siju, silent567, Stefan Dyulgerov, steven, Tao Wei, Thor Johnsen, Tingbo Lu, tomguluson92, Tongxuan Liu, Trevor Morris, Ubuntu, Vadim Borisov, vanderliang, wangsiyu, Wen Yun, WenHeng (Jack) Chung, wenxizhu, William D. Irons, Xiaoming (Jason) Cui, Yan Facai (颜发才), Yanbo Liang, Yaniv Blumenfeld, Yash Gaurkar, Yicheng Fan, Yong Tang, Yongjoon Lee, Yuan (Terry) Tang, Yuxin Wu, zldrobit
Release 1.13.0 RC2
(Release notes identical to the 1.13.1 release above.)
Release 1.13.0
Major Features and Improvements
* TensorFlow Lite has moved from contrib to core. This means that Python modules are under `tf.lite` and source code is now under `tensorflow/lite` rather than `tensorflow/contrib/lite`.
* TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
* Support for Python 3.7 on all operating systems.
* Moved NCCL to core.
Behavioral changes
* Disallow conversion of Python floating-point types to uint32/64 in `tf.constant`, matching the behavior of other integer types.
* Make the `gain` argument of the convolutional orthogonal initializers (`convolutional_delta_orthogonal`, `convolutional_orthogonal_1D`, `convolutional_orthogonal_2D`, `convolutional_orthogonal_3D`) behave consistently with the `tf.initializers.orthogonal` initializer, i.e. scale the output l2-norm by `gain` and NOT by `sqrt(gain)`. (Note that these functions are currently in `tf.contrib`, which is not guaranteed to be backward compatible.)
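The new scaling convention can be sketched in plain Python (an illustration with a hypothetical helper, not TF's actual initializer code): each orthonormal column produced by the initializer has unit l2-norm and is multiplied by `gain` directly, rather than by `sqrt(gain)`.

```python
import math

def l2_norm(vec):
    """Euclidean norm of a plain Python vector."""
    return math.sqrt(sum(x * x for x in vec))

def scale_orthonormal(vec, gain):
    """Scale a unit-l2-norm vector so its l2-norm equals `gain`
    (the new convention), not sqrt(gain) (the old behavior)."""
    return [gain * x for x in vec]

# A unit vector, standing in for one column of an orthogonal initializer.
unit = [0.6, 0.8]
scaled = scale_orthonormal(unit, gain=4.0)
# New behavior: l2-norm of `scaled` is 4.0 (= gain), not 2.0 (= sqrt(gain)).
```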
Bug Fixes and Other Changes
* Documentation
  * Update the docs with details about the rounding mode used in `quantize_and_dequantize_v2`.
  * Clarify that `tensorflow::port::InitMain()` should be called before using the TensorFlow library. Programs failing to do this are not portable to all platforms.
* Deprecations and symbol renames
  * Removing deprecations for the following endpoints: `tf.acos`, `tf.acosh`, `tf.add`, `tf.as_string`, `tf.asin`, `tf.asinh`, `tf.atan`, `tf.atan2`, `tf.atanh`, `tf.cos`, `tf.cosh`, `tf.equal`, `tf.exp`, `tf.floor`, `tf.greater`, `tf.greater_equal`, `tf.less`, `tf.less_equal`, `tf.log`, `tf.log1p`, `tf.logical_and`, `tf.logical_not`, `tf.logical_or`, `tf.maximum`, `tf.minimum`, `tf.not_equal`, `tf.sin`, `tf.sinh`, `tf.tan`.
  * Deprecate `tf.data.Dataset.shard`.
  * Deprecate `saved_model.loader.load`, which is replaced by `saved_model.load`, and `saved_model.main_op`, which will be replaced by `saved_model.main_op` in V2.
  * Deprecate `tf.QUANTIZED_DTYPES`. The official new symbol is `tf.dtypes.QUANTIZED_DTYPES`.
  * Update sklearn imports for deprecated packages.
  * Deprecate `Variable.count_up_to` and `tf.count_up_to` in favor of `Dataset.range`.
  * Export the `confusion_matrix` op as `tf.math.confusion_matrix` instead of `tf.train.confusion_matrix`.
  * Add a `tf.dtypes.` endpoint for every constant in dtypes.py; move endpoints in versions.py to corresponding endpoints in `tf.sysconfig.` and `tf.version.`; move all constants under `tf.saved_model` submodules to the `tf.saved_model` module. New endpoints are added in V1 and V2, but existing endpoint removals are only applied in V2.
  * Deprecate the behavior where device assignment overrides collocation constraints inside a collocation context manager.
* Keras & Python API
  * Add to Keras functionality analogous to `tf.register_tensor_conversion_function`.
  * Subclassed Keras models can now be saved through `tf.contrib.saved_model.save_keras_model`.
  * `LinearOperator.matmul` now returns a new `LinearOperator`.
* New ops and improved op functionality
  * Add a Nearest Neighbor Resize op.
  * Add an `ignore_unknown` argument to `parse_values`, which suppresses `ValueError` for unknown hyperparameter types; such hyperparameters are ignored.
  * Add the `tf.linalg.matvec` convenience function.
  * `tf.einsum()` raises `ValueError` for unsupported equations like `"ii->"`.
  * Add DCT-I and IDCT-I in `tf.signal.dct` and `tf.signal.idct`.
  * Add an LU decomposition op.
  * Add quantile loss to gradient boosted trees in estimator.
  * Add `round_mode` to the `QuantizeAndDequantizeV2` op to select the rounding algorithm.
  * Add `unicode_encode`, `unicode_decode`, `unicode_decode_with_offsets`, `unicode_split`, `unicode_split_with_offsets`, and `unicode_transcode` ops. Amongst other things, these ops add the ability to encode, decode, and transcode a variety of input text encodings into the main Unicode encodings (UTF-8, UTF-16-BE, UTF-32-BE).
  * Add a "unit" attribute to the substr op, which allows obtaining the substring of a string containing unicode characters.
  * Broadcasting support for Ragged Tensors.
  * `SpaceToDepth` supports the uint8 data type.
  * Support multi-label quantile regression in estimator.
  * We now use "div" as the default `partition_strategy` in `tf.nn.safe_embedding_lookup_sparse`, `tf.nn.sampled_softmax` and `tf.nn.nce_loss`.
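What the unicode transcode op does can be illustrated with Python's built-in codecs (plain Python standing in for the TF op, which operates on string tensors): transcoding re-encodes the same code points from one Unicode encoding to another.

```python
def transcode(data: bytes, input_encoding: str, output_encoding: str) -> bytes:
    """Re-encode `data` from one Unicode encoding to another,
    mirroring what a transcode op does for a single string."""
    return data.decode(input_encoding).encode(output_encoding)

# "héllo" is 6 bytes in UTF-8 (é takes two bytes) ...
utf8 = "héllo".encode("utf-8")
# ... and 10 bytes in UTF-16-BE (5 code points × 2 bytes each).
utf16be = transcode(utf8, "utf-8", "utf-16-be")
```

Round-tripping back to UTF-8 recovers the original bytes, since only the encoding changes, never the code points.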
* Performance
  * Improve performance of GPU cumsum/cumprod by up to 300x.
  * Added support for weight decay in most TPU embedding optimizers, including AdamW and MomentumW.
* TensorFlow 2.0 Development
  * Add a command-line tool, tf_upgrade_v2, to convert code to TF 2.0.
  * Merge `tf.spectral` into `tf.signal` for TensorFlow 2.0.
  * Change the default recurrent activation function for LSTM from 'hard_sigmoid' to 'sigmoid' in 2.0. Historically, the recurrent activation was 'hard_sigmoid' because it is faster than 'sigmoid'. With the new unified backend between CPU and GPU modes, and because the cuDNN kernel uses sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default LSTM is compatible with both the CPU and GPU kernels, enabling users with a GPU to use the cuDNN kernel by default and get a ~10x performance boost in training. Note that this is a checkpoint-breaking change: to use a 1.x pre-trained checkpoint, construct the layer with `LSTM(recurrent_activation='hard_sigmoid')` to fall back to the 1.x behavior.
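The two activations differ only in the middle of their range, which is why weights trained with one are not interchangeable with the other. A sketch in plain Python, assuming Keras's piecewise-linear definition of hard_sigmoid (`clip(0.2*x + 0.5, 0, 1)`):

```python
import math

def sigmoid(x: float) -> float:
    """The smooth logistic function used by the cuDNN LSTM kernel."""
    return 1.0 / (1.0 + math.exp(-x))

def hard_sigmoid(x: float) -> float:
    """Keras's piecewise-linear approximation: cheaper to compute,
    but numerically different from sigmoid away from the tails."""
    return max(0.0, min(1.0, 0.2 * x + 0.5))

# Both agree at x=0 (0.5) and saturate at the tails, but diverge
# in between: sigmoid(1) ~= 0.731 while hard_sigmoid(1) == 0.7.
```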
* TensorFlow Lite
  * Move from `tensorflow/contrib/lite` to `tensorflow/lite`.
  * Add experimental Java API for injecting TensorFlow Lite delegates.
  * Add support for strings in the TensorFlow Lite Java API.
* `tf.contrib`
  * Add Apache Ignite Filesystem plugin to support accessing Apache IGFS.
  * Dropout now takes a `rate` argument; `keep_prob` is deprecated.
  * References to `tf.contrib.estimator` were changed to `tf.estimator`:
    * `tf.contrib.estimator.BaselineEstimator` with `tf.estimator.BaselineEstimator`
    * `tf.contrib.estimator.DNNLinearCombinedEstimator` with `tf.estimator.DNNLinearCombinedEstimator`
    * `tf.contrib.estimator.DNNEstimator` with `tf.estimator.DNNEstimator`
    * `tf.contrib.estimator.LinearEstimator` with `tf.estimator.LinearEstimator`
    * `tf.contrib.estimator.InMemoryEvaluatorHook` with `tf.estimator.experimental.InMemoryEvaluatorHook`
    * `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with `tf.estimator.experimental.make_stop_at_checkpoint_step_hook`
  * Expose `tf.distribute.Strategy` as the new name for `tf.contrib.distribute.DistributionStrategy`.
  * Migrate linear optimizer from contrib to core.
  * Move `tf.contrib.signal` to `tf.signal` (preserving aliases in `tf.contrib.signal`).
  * Users of `tf.contrib.estimator.export_all_saved_models` and related should switch to `tf.estimator.Estimator.experimental_export_all_saved_models`.
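The `rate`/`keep_prob` change above is just a flipped convention (`rate = 1 - keep_prob`). A minimal inverted-dropout sketch in plain Python (illustration only, not TF's implementation):

```python
import random

def dropout(values, rate, seed=None):
    """Inverted dropout: drop each element with probability `rate`
    (i.e. keep it with probability keep_prob = 1 - rate) and rescale
    the survivors by 1 / keep_prob to preserve the expected sum."""
    rng = random.Random(seed)
    keep_prob = 1.0 - rate
    return [v / keep_prob if rng.random() < keep_prob else 0.0
            for v in values]

# rate=0.0 (i.e. keep_prob=1.0) is the identity; rate=0.5 zeroes
# roughly half the elements and doubles the rest.
```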
* tf.data
  * Add `tf.data.experimental.StatsOptions()` to configure options for collecting statistics from a `tf.data.Dataset` pipeline using `StatsAggregator`. Add a nested option, `experimental_stats` (which takes a `tf.data.experimental.StatsOptions` object), to `tf.data.Options`. Deprecate `tf.data.experimental.set_stats_aggregator`.
  * Performance optimizations:
    * Add `tf.data.experimental.OptimizationOptions()` to configure options for enabling `tf.data` performance optimizations. Add a nested option, `experimental_optimization` (which takes a `tf.data.experimental.OptimizationOptions` object), to `tf.data.Options`. Remove the performance optimization options from `tf.data.Options` and add them under `tf.data.experimental.OptimizationOptions` instead.
    * Enable `map_and_batch_fusion` and `noop_elimination` optimizations by default. They can be disabled by configuring `tf.data.experimental.OptimizationOptions` to set `map_and_batch = False` or `noop_elimination = False` respectively. To disable all default optimizations, set `apply_default_optimizations = False`.
    * Support parallel map in `map_and_filter_fusion`.
    * Disable static optimizations for input pipelines that use non-resource `tf.Variable`s.
  * Add a NUMA-aware MapAndBatch dataset.
  * Deprecate `tf.data.Dataset.make_one_shot_iterator()` in V1, remove it from V2, and add `tf.compat.v1.data.make_one_shot_iterator()`.
  * Deprecate `tf.data.Dataset.make_initializable_iterator()` in V1, remove it from V2, and add `tf.compat.v1.data.make_initializable_iterator()`.
  * Enable nested dataset support in core `tf.data` transformations.
  * For `tf.data.Dataset` implementers: add the `tf.data.Dataset._element_structure` property to replace `Dataset.output_{types,shapes,classes}`.
  * Make `num_parallel_calls` of `tf.data.Dataset.interleave` and `tf.data.Dataset.map` work in eager mode.
* Toolchains
  * Fixed OpenSSL compatibility by avoiding `EVP_MD_CTX_destroy`.
  * Added bounds checking when printing deprecation warnings.
  * Upgraded CUDA dependency to 10.0.
  * To build with Android NDK r14b, add `#include <linux/compiler.h>` to `android-ndk-r14b/platforms/android-14/arch-*/usr/include/linux/futex.h`.
  * Removed `:android_tensorflow_lib_selective_registration*` targets; use `:android_tensorflow_lib_lite*` targets instead.
* XLA
  * Move the `RoundToEven` function to xla/client/lib/math.h.
  * A new environment variable, `TF_XLA_DEBUG_OPTIONS_PASSTHROUGH`, set to "1" or "true", allows the debug options passed within an XRTCompile op to be passed directly to the XLA compilation backend. If the variable is not set (service side), only a restricted set is passed through.
  * Allow the XRTCompile op to return the ProgramShape resulting from the XLA compilation as a second return argument.
  * XLA HLO graphs can now be rendered as SVG/HTML.
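Service-side gating on an environment variable of this kind typically looks like the following sketch (plain Python with hypothetical helper and option names, not the actual XRT implementation):

```python
import os

TRUTHY = {"1", "true"}

def passthrough_enabled(env=os.environ) -> bool:
    """True when TF_XLA_DEBUG_OPTIONS_PASSTHROUGH is set to "1" or "true"."""
    value = env.get("TF_XLA_DEBUG_OPTIONS_PASSTHROUGH", "")
    return value.strip().lower() in TRUTHY

def filter_debug_options(requested, allowlist, env=os.environ):
    """Pass every requested debug option through when the passthrough
    flag is on; otherwise keep only the restricted allowlisted subset."""
    if passthrough_enabled(env):
        return dict(requested)
    return {k: v for k, v in requested.items() if k in allowlist}
```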
* Estimator
  * Replace all occurrences of `tf.contrib.estimator.BaselineEstimator` with `tf.estimator.BaselineEstimator`.
  * Replace all occurrences of `tf.contrib.estimator.DNNLinearCombinedEstimator` with `tf.estimator.DNNLinearCombinedEstimator`.
  * Replace all occurrences of `tf.contrib.estimator.DNNEstimator` with `tf.estimator.DNNEstimator`.
  * Replace all occurrences of `tf.contrib.estimator.LinearEstimator` with `tf.estimator.LinearEstimator`.
  * Replace all occurrences of `tf.contrib.estimator.InMemoryEvaluatorHook` and `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with `tf.estimator.experimental.InMemoryEvaluatorHook` and `tf.estimator.experimental.make_stop_at_checkpoint_step_hook`.
  * Users of `tf.contrib.estimator.export_all_saved_models` and related should switch to `tf.estimator.Estimator.experimental_export_all_saved_models`.
  * Update `regression_head` to the new Head API for Canned Estimator V2.
  * Switch `multi_class_head` to the Head API for Canned Estimator V2.
  * Migrate linear optimizer from contrib to core.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abhinav Upadhyay, Ag Ramesh, akikaaa, Alexis Louis, Anders Huss, Andreas Madsen, Andrew Banchich, Andy Craze, Anton Dmitriev, Artem Malykh, AvijitNervana, Balint Cristian, Benjamin Tan Wei Hao, Bhavani Subramanian, Brendan Finan, Brian Nemsick, Bryan Cutler, By Shen, Cao Zongyan, Castiel, Chris Antaki, Christian Goll, Cibifang, Clayne Robison, Codrut Grosu, Cong Xu, Dalmo Cirne, Daniel Hunter, Dougal J. Sutherland, Edvard Fagerholm, EFanZh, Erik Smistad, Evgeniy Polyakov, Feiyang Chen, franklin5, Fred Reiss, Gautam, gehring, Geoffrey Irving, George Sterpu, Gitea, Grzegorz George Pawelczak, Guozhong Zhuang, himkt, Hoeseong Kim, Huan Li (李卓桓), HuiyangFei, hyunyoung, Isaac Burbank, jackonan, Jacky Ko, Jason Furmanek, Jason Zaman, Javier Luraschi, Jiang,Zhoulong, joaak, John Lin, Jonathan Wyatt Hoech, josephyearsley, Josh Gordon, Julian Niedermeier, Karl Lessard, Keno Fischer, lanhin, Leon Graser, leondgarse, Li, Guizi, Li, Yiqiang, lxl910915, Mahmoud Abuzaina, manhyuk, Marcela Morales Quispe, margaretmz, Matt Conley, Max Pumperla, mbhuiyan, mdfaijul, Meng, Peng, Michael, Michael Gielda, mrTsjolder, Muhammad Wildan, neargye, Nehal J Wani, NEWPLAN, Niranjan Hasabnis, Nutti, olicht, Pan Daoxin, Pedro Monreal, Peng Yu, pillarpond, Pooya Davoodi, qiezi, Rholais Lii, Richard Yu, Rin Arakaki, Roger Iyengar, sahilbadyal, Sami Kama, Sandip Giri, Scott Leishman, Serge Panev, Seunghoon Park, Shafi Dayatar, shengfuintel, Shimin Guo, Siju, silent567, Stefan Dyulgerov, steven, Tao Wei, Thor Johnsen, Tingbo Lu, tomguluson92, Tongxuan Liu, Trevor Morris, Ubuntu, Vadim Borisov, vanderliang, wangsiyu, Wen Yun, WenHeng (Jack) Chung, wenxizhu, William D. Irons, Xiaoming (Jason) Cui, Yan Facai (颜发才), Yanbo Liang, Yaniv Blumenfeld, Yash Gaurkar, Yicheng Fan, Yong Tang, Yongjoon Lee, Yuan (Terry) Tang, Yuxin Wu, zldrobit
Release 1.13.0 RC2
(Release notes identical to the final 1.13.0 release above.)
Assets
2
Release 1.13.0
Major Features and Improvements
 TensorFlow Lite has moved from contrib to core. This means that Python modules are under
tf.lite
and source code is now undertensorflow/lite
rather thantensorflow/contrib/lite
.  TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
 Moved NCCL to core.
Behavioral changes
 Disallow conversion of python floating types to uint32/64 (matching behavior of other integer types) in
tf.constant
.  Make the
gain
argument of convolutional orthogonal initializers (convolutional_delta_orthogonal
,convolutional_orthogonal_1D
,convolutional_orthogonal_2D
,convolutional_orthogonal_3D
) have consistent behavior with thetf.initializers.orthogonal
initializer, i.e. scale the output l2norm bygain
and NOT bysqrt(gain)
. (Note that these functions are currently intf.contrib
which is not guaranteed backward compatible).
Bug Fixes and Other Changes
 Documentation
 Update the doc with the details about the rounding mode used in quantize_and_dequantize_v2.
 Clarify that tensorflow::port::InitMain() should be called before using the TensorFlow library. Programs failing to do this are not portable to all platforms.
 Deprecations and Symbol renames.
 Removing deprecations for the following endpoints:
tf.acos
,tf.acosh
,tf.add
,tf.as_string
,tf.asin
,tf.asinh
,tf.atan
,tf.atan2
,tf.atanh
,tf.cos
,tf.cosh
,tf.equal
,tf.exp
,tf.floor
,tf.greater
,tf.greater_equal
,tf.less
,tf.less_equal
,tf.log
,tf.logp1
,tf.logical_and
,tf.logical_not
,tf.logical_or
,tf.maximum
,tf.minimum
,tf.not_equal
,tf.sin
,tf.sinh
,tf.tan
 Deprecate
tf.data.Dataset.shard
.  Deprecate
saved_model.loader.load
which is replaced bysaved_model.load
andsaved_model.main_op
, which will be replaced bysaved_model.main_op
in V2.  Deprecate tf.QUANTIZED_DTYPES. The official new symbol is tf.dtypes.QUANTIZED_DTYPES.
 Update sklearn imports for deprecated packages.
 Deprecate
Variable.count_up_to
andtf.count_up_to
in favor ofDataset.range
.  Export
confusion_matrix
op astf.math.confusion_matrix
instead oftf.train.confusion_matrix
.  Add
tf.dtypes.
endpoint for every constant in dtypes.py; moving endpoints in versions.py to corresponding endpoints intf.sysconfig.
andtf.version.
; moving all constants undertf.saved_model
submodules totf.saved_model
module. New endpoints are added in V1 and V2 but existing endpoint removals are only applied in V2.  Deprecates behavior where device assignment overrides collocation constraints inside a collocation context manager.
 Removing deprecations for the following endpoints:
 Keras & Python API
 Add to Keras functionality analogous to
tf.register_tensor_conversion_function
.  Subclassed Keras models can now be saved through
tf.contrib.saved_model.save_keras_model
. LinearOperator.matmul
now returns a newLinearOperator
.
 Add to Keras functionality analogous to
 New ops and improved op functionality
 Add a Nearest Neighbor Resize op.
 Add an
ignore_unknown
argument toparse_values
which suppresses ValueError for unknown hyperparameter types. Such * Addtf.linalg.matvec
convenience function. tf.einsum()
raisesValueError
for unsupported equations like"ii>"
. Add DCTI and IDCTI in
tf.signal.dct
andtf.signal.idct
.  Add LU decomposition op.
 Add quantile loss to gradient boosted trees in estimator.
 Add
round_mode
toQuantizeAndDequantizeV2
op to select rounding algorithm.  Add
unicode_encode
,unicode_decode
,unicode_decode_with_offsets
,unicode_split
,unicode_split_with_offset
, andunicode_transcode
ops. Amongst other things, this Op adds the ability to encode, decode, and transcode a variety of input text encoding formats into the main Unicode encodings (UTF8, UTF16BE, UTF32BE)  Add "unit" attribute to the substr op, which allows obtaining the substring of a string containing unicode characters.
 Broadcasting support for Ragged Tensors.
SpaceToDepth
supports uint8 data type. Support multilabel quantile regression in estimator.
 We now use "div" as the default partition_strategy in
tf.nn.safe_embedding_lookup_sparse
,tf.nn.sampled_softmax
andtf.nn.nce_loss
.
hyperparameter are ignored.
 Performance
 Improve performance of GPU cumsum/cumprod by up to 300x.
 Added support for weight decay in most TPU embedding optimizers, including AdamW and MomentumW.
 TensorFlow 2.0 Development
 Add a command line tool to convert to TF2.0, tf_upgrade_v2
 Merge
tf.spectral
intotf.signal
for TensorFlow 2.0.  Change the default recurrent activation function for LSTM from 'hard_sigmoid' to 'sigmoid' in 2.0. Historically recurrent activation is 'hard_sigmoid' since it is fast than 'sigmoid'. With new unified backend between CPU and GPU mode, since the CuDNN kernel is using sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default LSTM will be compatible with both CPU and GPU kernel. This will enable user with GPU to use CuDNN kernel by default and get a 10x performance boost in training. Note that this is checkpoint breaking change. If user want to use their 1.x pretrained checkpoint, please construct the layer with LSTM(recurrent_activation='hard_sigmoid') to fallback to 1.x behavior.
 TensorFlow Lite
 Move from
tensorflow/contrib/lite
totensorflow/lite
.  Add experimental Java API for injecting TensorFlow Lite delegates
 Add support for strings in TensorFlow Lite Java API.
 Move from
tf.contrib
: Add Apache Ignite Filesystem plugin to support accessing Apache IGFS.
 Dropout now takes
rate
argument,keep_prob
is deprecated.  Estimator occurrences references
tf.contrib.estimator
were changed totf.estimator
:tf.contrib.estimator.BaselineEstimator
withtf.estimator.BaselineEstimator
tf.contrib.estimator.DNNLinearCombinedEstimator
withtf.estimator.DNNLinearCombinedEstimator
tf.contrib.estimator.DNNEstimator
withtf.estimator.DNNEstimator
tf.contrib.estimator.LinearEstimator
withtf.estimator.LinearEstimator
tf.contrib.estimator.InMemoryEvaluatorHook
and tf.estimator.experimental.InMemoryEvaluatorHook`.tf.contrib.estimator.make_stop_at_checkpoint_step_hook
withtf.estimator.experimental.make_stop_at_checkpoint_step_hook
.
 Expose `tf.distribute.Strategy as the new name for tf.contrib.distribute.DistributionStrategy.
 Migrate linear optimizer from contrib to core.
 Move
tf.contrib.signal
totf.signal
(preserving aliases in tf.contrib.signal).  Users of
tf.contrib.estimator.export_all_saved_models
and related should switch totf.estimator.Estimator.experimental_export_all_saved_models
.
 tf.data:
 Add
tf.data.experimental.StatsOptions()
, to configure options to collect statistics fromtf.data.Dataset
pipeline usingStatsAggregator
. Add nested option,experimental_stats
(which takes atf.data.experimen tal.StatsOptions
object), totf.data.Options
. Deprecatestf.data.experimental.set_stats_agregator
.  Performance optimizations:
 Add
tf.data.experimental.OptimizationOptions()
, to configure options to enabletf.data
performance optimizations. Add nested option,experimental_optimization
(which takes atf.data.experimental.OptimizationOptions
object), totf.data.Options
. Remove performance optimization options fromtf.data.Options
, and add them undertf.data.experimental.OptimizationOptions
instead.  Enable
map_and_batch_fusion
andnoop_elimination
optimizations by default. They can be disabled by configuringtf.data.experimental.OptimizationOptions
to setmap_and_batch = False
ornoop_elimination = False
respectively. To disable all default optimizations, setapply_default_optimizations = False
.  Support parallel map in
map_and_filter_fusion
.  Disable static optimizations for input pipelines that use nonresource
tf.Variable
s.
    - Add NUMA-aware MapAndBatch dataset.
  - Deprecate `tf.data.Dataset.make_one_shot_iterator()` in V1, removed it from V2, and added `tf.compat.v1.data.make_one_shot_iterator()`.
  - Deprecate `tf.data.Dataset.make_initializable_iterator()` in V1, removed it from V2, and added `tf.compat.v1.data.make_initializable_iterator()`.
  - Enable nested dataset support in core `tf.data` transformations.
  - For `tf.data.Dataset` implementers: Added `tf.data.Dataset._element_structure` property to replace `Dataset.output_{types,shapes,classes}`.
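As an illustration of the options mechanism above, here is a minimal sketch of opting out of the default `tf.data` optimizations. It is written in TF 2.x eager style, and the attribute names follow the current public API, which may differ slightly from the 1.13-era names quoted in the note:

```python
import tensorflow as tf

# Configure tf.data optimizations through tf.data.Options; by default
# map_and_batch_fusion and noop_elimination are enabled.
options = tf.data.Options()
options.experimental_optimization.map_and_batch_fusion = False
options.experimental_optimization.noop_elimination = False

# with_options() attaches the options to a pipeline; the data itself
# is unchanged, only how the runtime rewrites the pipeline graph.
dataset = tf.data.Dataset.range(4).with_options(options)
print(list(dataset.as_numpy_iterator()))  # [0, 1, 2, 3]
```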
- Toolchains
  - Fixed OpenSSL compatibility by avoiding `EVP_MD_CTX_destroy`.
  - Added bounds checking to printing deprecation warnings.
  - Upgraded CUDA dependency to 10.0.
  - To build with Android NDK r14b, add "#include <linux/compiler.h>" to android-ndk-r14b/platforms/android-14/arch-*/usr/include/linux/futex.h.
  - Removed `:android_tensorflow_lib_selective_registration*` targets; use `:android_tensorflow_lib_lite*` targets instead.
- XLA
  - Move `RoundToEven` function to xla/client/lib/math.h.
  - A new environment variable `TF_XLA_DEBUG_OPTIONS_PASSTHROUGH`, set to "1" or "true", allows the debug options passed within an XRTCompile op to be passed directly to the XLA compilation backend. If the variable is not set (service side), only a restricted set will be passed through.
  - Allow the XRTCompile op to return the ProgramShape resulting from the XLA compilation as a second return argument.
  - XLA HLO graphs can now be rendered as SVG/HTML.
- Estimator
  - Replace all occurrences of `tf.contrib.estimator.BaselineEstimator` with `tf.estimator.BaselineEstimator`.
  - Replace all occurrences of `tf.contrib.estimator.DNNLinearCombinedEstimator` with `tf.estimator.DNNLinearCombinedEstimator`.
  - Replace all occurrences of `tf.contrib.estimator.DNNEstimator` with `tf.estimator.DNNEstimator`.
  - Replace all occurrences of `tf.contrib.estimator.LinearEstimator` with `tf.estimator.LinearEstimator`.
  - Users of `tf.contrib.estimator.export_all_saved_models` and related should switch to `tf.estimator.Estimator.experimental_export_all_saved_models`.
  - Update `regression_head` to the new Head API for Canned Estimator V2.
  - Switch `multi_class_head` to the Head API for Canned Estimator V2.
  - Replace all occurrences of `tf.contrib.estimator.InMemoryEvaluatorHook` and `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with `tf.estimator.experimental.InMemoryEvaluatorHook` and `tf.estimator.experimental.make_stop_at_checkpoint_step_hook`.
  - Migrate linear optimizer from contrib to core.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abhinav Upadhyay, Ag Ramesh, akikaaa, Alexis Louis, Anders Huss, Andreas Madsen, Andrew Banchich, Andy Craze, Anton Dmitriev, Artem Malykh, AvijitNervana, Balint Cristian, Benjamin Tan Wei Hao, Bhavani Subramanian, Brendan Finan, Brian Nemsick, Bryan Cutler, By Shen, Cao Zongyan, Castiel, Chris Antaki, Christian Goll, Cibifang, Clayne Robison, Codrut Grosu, Cong Xu, Dalmo Cirne, Daniel Hunter, Dougal J. Sutherland, Edvard Fagerholm, EFanZh, Erik Smistad, Evgeniy Polyakov, Feiyang Chen, franklin5, Fred Reiss, Gautam, gehring, Geoffrey Irving, George Sterpu, Gitea, Grzegorz George Pawelczak, Guozhong Zhuang, himkt, Hoeseong Kim, Huan Li (李卓桓), HuiyangFei, hyunyoung, Isaac Burbank, jackonan, Jacky Ko, Jason Furmanek, Jason Zaman, Javier Luraschi, Jiang,Zhoulong, joaak, John Lin, Jonathan Wyatt Hoech, josephyearsley, Josh Gordon, Julian Niedermeier, Karl Lessard, Keno Fischer, lanhin, Leon Graser, leondgarse, Li, Guizi, Li, Yiqiang, lxl910915, Mahmoud Abuzaina, manhyuk, Marcela Morales Quispe, margaretmz, Matt Conley, Max Pumperla, mbhuiyan, mdfaijul, Meng, Peng, Michael, Michael Gielda, mrTsjolder, Muhammad Wildan, neargye, Nehal J Wani, NEWPLAN, Niranjan Hasabnis, Nutti, olicht, Pan Daoxin, Pedro Monreal, Peng Yu, pillarpond, Pooya Davoodi, qiezi, Rholais Lii, Richard Yu, Rin Arakaki, Roger Iyengar, sahilbadyal, Sami Kama, Sandip Giri, Scott Leishman, Serge Panev, Seunghoon Park, Shafi Dayatar, shengfuintel, Shimin Guo, Siju, silent567, Stefan Dyulgerov, steven, Tao Wei, Thor Johnsen, Tingbo Lu, tomguluson92, Tongxuan Liu, Trevor Morris, Ubuntu, Vadim Borisov, vanderliang, wangsiyu, Wen Yun, WenHeng (Jack) Chung, wenxizhu, William D. Irons, Xiaoming (Jason) Cui, Yan Facai (颜发才), Yanbo Liang, Yaniv Blumenfeld, Yash Gaurkar, Yicheng Fan, Yong Tang, Yongjoon Lee, Yuan (Terry) Tang, Yuxin Wu, zldrobit
Release 1.13.0
Major Features and Improvements
- TensorFlow Lite has moved from contrib to core. This means that Python modules are under `tf.lite` and source code is now under `tensorflow/lite` rather than `tensorflow/contrib/lite`.
- TensorFlow GPU binaries are now built against CUDA 10.
- Moved NCCL to core.
Behavioral changes
- Disallow conversion of python floating types to uint32/64 (matching behavior of other integer types) in `tf.constant`.
- Make the `gain` argument of convolutional orthogonal initializers (`convolutional_delta_orthogonal`, `convolutional_orthogonal_1D`, `convolutional_orthogonal_2D`, `convolutional_orthogonal_3D`) have consistent behavior with the `tf.initializers.orthogonal` initializer, i.e. scale the output l2-norm by `gain` and NOT by `sqrt(gain)`. (Note that these functions are currently in `tf.contrib`, which is not guaranteed backward compatible.)
Bug Fixes and Other Changes
 Documentation
 Update the doc with the details about the rounding mode used in quantize_and_dequantize_v2.
 Clarify that tensorflow::port::InitMain() should be called before using the TensorFlow library. Programs failing to do this are not portable to all platforms.
- Deprecations and symbol renames.
  - Removing deprecations for the following endpoints: `tf.acos`, `tf.acosh`, `tf.add`, `tf.as_string`, `tf.asin`, `tf.asinh`, `tf.atan`, `tf.atan2`, `tf.atanh`, `tf.cos`, `tf.cosh`, `tf.equal`, `tf.exp`, `tf.floor`, `tf.greater`, `tf.greater_equal`, `tf.less`, `tf.less_equal`, `tf.log`, `tf.log1p`, `tf.logical_and`, `tf.logical_not`, `tf.logical_or`, `tf.maximum`, `tf.minimum`, `tf.not_equal`, `tf.sin`, `tf.sinh`, `tf.tan`.
  - Deprecate `tf.data.Dataset.shard`.
  - Deprecate `saved_model.loader.load`, which is replaced by `saved_model.load`, and `saved_model.main_op`, which will be replaced by `saved_model.main_op` in V2.
  - Deprecate `tf.QUANTIZED_DTYPES`. The official new symbol is `tf.dtypes.QUANTIZED_DTYPES`.
  - Update sklearn imports for deprecated packages.
  - Deprecate `Variable.count_up_to` and `tf.count_up_to` in favor of `Dataset.range`.
  - Export the `confusion_matrix` op as `tf.math.confusion_matrix` instead of `tf.train.confusion_matrix`.
  - Add a `tf.dtypes.` endpoint for every constant in dtypes.py; moving endpoints in versions.py to corresponding endpoints in `tf.sysconfig.` and `tf.version.`; moving all constants under `tf.saved_model` submodules to the `tf.saved_model` module. New endpoints are added in V1 and V2, but existing endpoint removals are only applied in V2.
  - Deprecates behavior where device assignment overrides collocation constraints inside a collocation context manager.
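After the rename above, the confusion-matrix op lives under `tf.math`. A minimal sketch, in TF 2.x eager style:

```python
import tensorflow as tf

# Rows index the true labels, columns index the predictions.
cm = tf.math.confusion_matrix(labels=[0, 1, 1], predictions=[0, 1, 0])
print(cm.numpy().tolist())  # [[1, 0], [1, 1]]
```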
- Keras & Python API
  - Add to Keras functionality analogous to `tf.register_tensor_conversion_function`.
  - Subclassed Keras models can now be saved through `tf.contrib.saved_model.save_keras_model`.
  - `LinearOperator.matmul` now returns a new `LinearOperator`.
- New ops and improved op functionality
  - Add a Nearest Neighbor Resize op.
  - Add an `ignore_unknown` argument to `parse_values` which suppresses ValueError for unknown hyperparameter types. Such hyperparameters are ignored.
  - Add `tf.linalg.matvec` convenience function.
  - `tf.einsum()` raises `ValueError` for unsupported equations like `"ii->"`.
  - Add DCT-I and IDCT-I in `tf.signal.dct` and `tf.signal.idct`.
  - Add LU decomposition op.
  - Add quantile loss to gradient boosted trees in estimator.
  - Add `round_mode` to `QuantizeAndDequantizeV2` op to select the rounding algorithm.
  - Add `unicode_encode`, `unicode_decode`, `unicode_decode_with_offsets`, `unicode_split`, `unicode_split_with_offsets`, and `unicode_transcode` ops. Amongst other things, these ops add the ability to encode, decode, and transcode a variety of input text encoding formats into the main Unicode encodings (UTF-8, UTF-16-BE, UTF-32-BE).
  - Add "unit" attribute to the substr op, which allows obtaining the substring of a string containing unicode characters.
  - Broadcasting support for Ragged Tensors.
  - `SpaceToDepth` supports uint8 data type.
  - Support multi-label quantile regression in estimator.
  - We now use "div" as the default partition_strategy in `tf.nn.safe_embedding_lookup_sparse`, `tf.nn.sampled_softmax` and `tf.nn.nce_loss`.
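The unicode ops listed above are exposed under `tf.strings`. A minimal sketch of decoding and transcoding, in TF 2.x eager style:

```python
import tensorflow as tf

# Decode a UTF-8 string into Unicode code points; for a scalar input
# the result is a 1-D int32 tensor of code points.
codepoints = tf.strings.unicode_decode(tf.constant("héllo"), "UTF-8")
print(codepoints.numpy().tolist())  # [104, 233, 108, 108, 111]

# Transcode the same bytes from UTF-8 to UTF-16-BE.
utf16 = tf.strings.unicode_transcode(tf.constant("héllo"), "UTF-8", "UTF-16-BE")
```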
 Performance
 Improve performance of GPU cumsum/cumprod by up to 300x.
 Added support for weight decay in most TPU embedding optimizers, including AdamW and MomentumW.
- TensorFlow 2.0 Development
  - Add a command line tool to convert to TF 2.0: tf_upgrade_v2.
  - Merge `tf.spectral` into `tf.signal` for TensorFlow 2.0.
  - Change the default recurrent activation function for LSTM from 'hard_sigmoid' to 'sigmoid' in 2.0. Historically the recurrent activation was 'hard_sigmoid' since it is faster than 'sigmoid'. With the new unified backend between CPU and GPU modes, and since the CuDNN kernel uses sigmoid, we change the default for CPU mode to sigmoid as well. With that, the default LSTM is compatible with both the CPU and GPU kernels. This enables users with GPUs to use the CuDNN kernel by default and get a 10x performance boost in training. Note that this is a checkpoint-breaking change. If you want to use a 1.x pretrained checkpoint, construct the layer with LSTM(recurrent_activation='hard_sigmoid') to fall back to 1.x behavior.
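Restoring the 1.x default explicitly, as the note suggests, can be sketched like this (layer sizes here are arbitrary, and the example uses TF 2.x eager style):

```python
import numpy as np
import tensorflow as tf

# Passing recurrent_activation='hard_sigmoid' reproduces the 1.x default,
# keeping a 1.x pretrained checkpoint loadable (at the cost of the CuDNN path).
layer = tf.keras.layers.LSTM(3, recurrent_activation='hard_sigmoid')

# Input is (batch, time, features); output is (batch, units).
out = layer(np.zeros((2, 4, 5), dtype=np.float32))
print(out.shape)  # (2, 3)
```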
- TensorFlow Lite
  - Move from `tensorflow/contrib/lite` to `tensorflow/lite`.
  - Add experimental Java API for injecting TensorFlow Lite delegates.
  - Add support for strings in TensorFlow Lite Java API.
- tf.contrib:
  - Add Apache Ignite Filesystem plugin to support accessing Apache IGFS.
  - Dropout now takes a `rate` argument; `keep_prob` is deprecated.
  - Estimator references to `tf.contrib.estimator` were changed to `tf.estimator`:
    - `tf.contrib.estimator.BaselineEstimator` with `tf.estimator.BaselineEstimator`
    - `tf.contrib.estimator.DNNLinearCombinedEstimator` with `tf.estimator.DNNLinearCombinedEstimator`
    - `tf.contrib.estimator.DNNEstimator` with `tf.estimator.DNNEstimator`
    - `tf.contrib.estimator.LinearEstimator` with `tf.estimator.LinearEstimator`
    - `tf.contrib.estimator.InMemoryEvaluatorHook` and `tf.contrib.estimator.make_stop_at_checkpoint_step_hook` with `tf.estimator.experimental.InMemoryEvaluatorHook` and `tf.estimator.experimental.make_stop_at_checkpoint_step_hook`.
- Expose `tf.distribute.Strategy` as the new name for `tf.contrib.distribute.DistributionStrategy`.
- Migrate linear optimizer from contrib to core.
- Move `tf.contrib.signal` to `tf.signal` (preserving aliases in `tf.contrib.signal`).
- Users of `tf.contrib.estimator.export_all_saved_models` and related should switch to `tf.estimator.Estimator.experimental_export_all_saved_models`.
- tf.data:
  - Add `tf.data.experimental.StatsOptions()`, to configure options to collect statistics from a `tf.data.Dataset` pipeline using a `StatsAggregator`. Adds option `experimental_stats` to `tf.data.Options`, which takes a `tf.data.experimental.StatsOptions` object. Deprecates `tf.data.experimental.set_stats_aggregator`.
  - NUMA-aware MapAndBatch dataset.
  - Parallel map and filter fusion.
  - Deprecate `tf.data.Dataset.make_one_shot_iterator()` in V1, removed it from V2, and added `tf.compat.v1.data.make_one_shot_iterator()`.
  - Deprecate `tf.data.Dataset.make_initializable_iterator()` in V1, removed it from V2, and added `tf.compat.v1.data.make_initializable_iterator()`.
  - Enable nested dataset support in core `tf.data` transformations.
  - For `tf.data.Dataset` implementers: Added `tf.data.Dataset._element_structure` property to replace `Dataset.output_{types,shapes,classes}`.
- Toolchains
  - Fixed OpenSSL compatibility by avoiding `EVP_MD_CTX_destroy`.
  - Added bounds checking to printing deprecation warnings.
  - Upgraded CUDA dependency to 10.0.
  - To build with Android NDK r14b, add "#include <linux/compiler.h>" to android-ndk-r14b/platforms/android-14/arch-*/usr/include/linux/futex.h.
  - Removed `:android_tensorflow_lib_selective_registration*` targets; use `:android_tensorflow_lib_lite*` targets instead.
- XLA
  - Move `RoundToEven` function to xla/client/lib/math.h.
  - A new environment variable `TF_XLA_DEBUG_OPTIONS_PASSTHROUGH`, set to "1" or "true", allows the debug options passed within an XRTCompile op to be passed directly to the XLA compilation backend. If the variable is not set (service side), only a restricted set will be passed through.
  - Allow the XRTCompile op to return the ProgramShape resulting from the XLA compilation as a second return argument.
  - XLA HLO graphs can now be rendered as SVG/HTML.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abhinav Upadhyay, Ag Ramesh, akikaaa, Alexis Louis, Anders Huss, Andreas Madsen, Andrew Banchich, Andy Craze, Anton Dmitriev, Artem Malykh, AvijitNervana, Balint Cristian, Benjamin Tan Wei Hao, Bhavani Subramanian, Brendan Finan, Brian Nemsick, Bryan Cutler, By Shen, Cao Zongyan, Castiel, Chris Antaki, Christian Goll, Cibifang, Clayne Robison, Codrut Grosu, Cong Xu, Dalmo Cirne, Daniel Hunter, Dougal J. Sutherland, Edvard Fagerholm, EFanZh, Erik Smistad, Evgeniy Polyakov, Feiyang Chen, franklin5, Fred Reiss, Gautam, gehring, Geoffrey Irving, George Sterpu, Gitea, Grzegorz George Pawelczak, Guozhong Zhuang, himkt, Hoeseong Kim, Huan Li (李卓桓), HuiyangFei, hyunyoung, Isaac Burbank, jackonan, Jacky Ko, Jason Furmanek, Jason Zaman, Javier Luraschi, Jiang,Zhoulong, joaak, John Lin, Jonathan Wyatt Hoech, josephyearsley, Josh Gordon, Julian Niedermeier, Karl Lessard, Keno Fischer, lanhin, Leon Graser, leondgarse, Li, Guizi, Li, Yiqiang, lxl910915, Mahmoud Abuzaina, manhyuk, Marcela Morales Quispe, margaretmz, Matt Conley, Max Pumperla, mbhuiyan, mdfaijul, Meng, Peng, Michael, Michael Gielda, mrTsjolder, Muhammad Wildan, neargye, Nehal J Wani, NEWPLAN, Niranjan Hasabnis, Nutti, olicht, Pan Daoxin, Pedro Monreal, Peng Yu, pillarpond, Pooya Davoodi, qiezi, Rholais Lii, Richard Yu, Rin Arakaki, Roger Iyengar, sahilbadyal, Sami Kama, Sandip Giri, Scott Leishman, Serge Panev, Seunghoon Park, Shafi Dayatar, shengfuintel, Shimin Guo, Siju, silent567, Stefan Dyulgerov, steven, Tao Wei, Thor Johnsen, Tingbo Lu, tomguluson92, Tongxuan Liu, Trevor Morris, Ubuntu, Vadim Borisov, vanderliang, wangsiyu, Wen Yun, WenHeng (Jack) Chung, wenxizhu, William D. Irons, Xiaoming (Jason) Cui, Yan Facai (颜发才), Yanbo Liang, Yaniv Blumenfeld, Yash Gaurkar, Yicheng Fan, Yong Tang, Yongjoon Lee, Yuan (Terry) Tang, Yuxin Wu, zldrobit
Release 1.12.0
Major Features and Improvements
- Keras models can now be directly exported to the SavedModel format (`tf.contrib.saved_model.save_keras_model()`) and used with TensorFlow Serving.
- Keras models now support evaluating with a `tf.data.Dataset`.
- TensorFlow binaries are built with XLA support linked in by default.
- Ignite Dataset added to contrib/ignite that allows working with Apache Ignite.
Bug Fixes and Other Changes
- tf.data:
  - `tf.data` users can now represent, get, and set options of TensorFlow input pipelines using `tf.data.Options()`, `tf.data.Dataset.options()`, and `tf.data.Dataset.with_options()` respectively.
  - New `tf.data.Dataset.reduce()` API allows users to reduce a finite dataset to a single element using a user-provided reduce function.
  - New `tf.data.Dataset.window()` API allows users to create finite windows of an input dataset; when combined with the `tf.data.Dataset.reduce()` API, this allows users to implement customized batching.
  - All C++ code moves to the `tensorflow::data` namespace.
  - Add support for `num_parallel_calls` to `tf.data.Dataset.interleave`.
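The `window()` and `reduce()` APIs above can be combined for customized batching. A minimal sketch in TF 2.x eager style:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10)

# window() yields nested datasets of up to 5 elements each;
# flat_map + batch densifies every window into a single batch.
batches = ds.window(5, drop_remainder=True).flat_map(lambda w: w.batch(5))

# reduce() folds a finite dataset into a single element using a
# user-provided reduce function.
total = ds.reduce(tf.constant(0, tf.int64), lambda acc, x: acc + x)
print(int(total))  # 45
```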
- tf.contrib:
  - Remove `tf.contrib.linalg`; `tf.linalg` should be used instead.
  - Replace any calls to `tf.contrib.get_signature_def_by_key(metagraph_def, signature_def_key)` with `meta_graph_def.signature_def[signature_def_key]`. Catching a ValueError exception thrown by `tf.contrib.get_signature_def_by_key` should be replaced by catching a KeyError exception.
  - `tf.contrib.data`: Deprecate, and replace by `tf.data.experimental`.
 Other:
 Improved XLA stability and performance.
 Fix single replica TensorBoard summary stats in Cloud ML Engine.
 TPUEstimator: Initialize dataset iterators in parallel.
 Keras on TPU model quality and bug fixes.
 Instead of jemalloc, revert back to using system malloc since it simplifies build and has comparable performance.
- Remove integer types from `tf.nn.softplus` and `tf.nn.softsign` OpDefs. This is a bugfix; these ops were never meant to support integers.
- Allow subslicing Tensors with a single dimension.
 Add option to calculate string length in Unicode characters
 Add functionality to SubSlice a tensor.
 Add searchsorted (ie lower/upper_bound) op.
 Add model explainability to Boosted Trees.
 Support negative positions for tf.substr
- There was previously a bug in the bijector_impl where _reduce_jacobian_det_over_event did not handle scalar ILDJ implementations properly.
 In tf eager execution, allow reentering a GradientTape context
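A minimal sketch of what re-entering a tape context enables, in TF 2.x eager style (variable names here are illustrative):

```python
import tensorflow as tf

x = tf.constant(3.0)
tape = tf.GradientTape()
with tape:            # first recording block
    tape.watch(x)
    y = x * x
with tape:            # re-enter the same tape to record further ops
    z = y * x         # z = x**3
grad = tape.gradient(z, x)
print(float(grad))    # 3 * x**2 = 27.0
```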
- Add tf_api_version flag. If --define=tf_api_version=2 is passed in, then bazel will build TensorFlow API version 2.0. Note that TensorFlow 2.0 is under active development and has no guarantees at this point.
- Add additional compression options to `TFRecordWriter`.
 Performance improvements for regex full match operations.
- Replace `tf.GraphKeys.VARIABLES` with `tf.GraphKeys.GLOBAL_VARIABLES`.
 Remove unused dynamic learning rate support.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
(David) SiuKei Muk, Ag Ramesh, Anton Dmitriev, Artem Sobolev, AvijitNervana, Bairen Yi, Bruno Goncalves, By Shen, candy.dc, Cheng Chen, Clayne Robison, coder3101, Dao Zhang, Elms, Fei Hu, feiquan, Geoffrey Irving, Guozhong Zhuang, hellcom, Hoeseong Kim, imsheridan, Jason Furmanek, Jason Zaman, Jenny Sahng, jiefangxuanyan, Johannes Bannhofer, Jonathan Homer, KoanSin Tan, kouml, Loo Rong Jie, Lukas Geiger, manipopopo, Ming Li, Moritz KröGer, Naurril, Niranjan Hasabnis, Pan Daoxin, Peng Yu, pengwa, rasmi, Roger Xin, Roland Fernandez, Sami Kama, Samuel Matzek, Sangjung Woo, Sergei Lebedev, Sergii Khomenko, shaohua, Shaohua Zhang, Shujian2015, Sunitha Kambhampati, tomguluson92, ViníCius Camargo, wangsiyu, weidankong, WenHeng (Jack) Chung, William D. Irons, Xin Jin, Yan Facai (颜发才), Yanbo Liang, Yash Katariya, Yong Tang, 在原佐为
Release 1.12.0rc2
Differences from 1.12.0rc1
 Improved XLA stability and performance.
 Fix single replica TensorBoard summary stats in Cloud ML Engine.
Differences from 1.12.0rc0
 Keras on TPU model quality and bug fixes.
 TPUEstimator: Initialize dataset iterators in parallel.
Major Features and Improvements
- Keras models can now be directly exported to the SavedModel format (`tf.contrib.saved_model.save_keras_model()`) and used with TensorFlow Serving.
- Keras models now support evaluating with a `tf.data.Dataset`.
- TensorFlow binaries are built with XLA support linked in by default.
- Ignite Dataset added to contrib/ignite that allows working with Apache Ignite.
Bug Fixes and Other Changes
- tf.data:
  - `tf.data` users can now represent, get, and set options of TensorFlow input pipelines using `tf.data.Options()`, `tf.data.Dataset.options()`, and `tf.data.Dataset.with_options()` respectively.
  - New `tf.data.Dataset.reduce()` API allows users to reduce a finite dataset to a single element using a user-provided reduce function.
  - New `tf.data.Dataset.window()` API allows users to create finite windows of an input dataset; when combined with the `tf.data.Dataset.reduce()` API, this allows users to implement customized batching.
  - All C++ code moves to the `tensorflow::data` namespace.
  - Add support for `num_parallel_calls` to `tf.data.Dataset.interleave`.
- tf.contrib:
  - Remove `tf.contrib.linalg`; `tf.linalg` should be used instead.
  - Replace any calls to `tf.contrib.get_signature_def_by_key(metagraph_def, signature_def_key)` with `meta_graph_def.signature_def[signature_def_key]`. Catching a ValueError exception thrown by `tf.contrib.get_signature_def_by_key` should be replaced by catching a KeyError exception.
  - `tf.contrib.data`: Deprecate, and replace by `tf.data.experimental`.
 Other:
 Instead of jemalloc, revert back to using system malloc since it simplifies build and has comparable performance.
- Remove integer types from `tf.nn.softplus` and `tf.nn.softsign` OpDefs. This is a bugfix; these ops were never meant to support integers.
- Allow subslicing Tensors with a single dimension.
 Add option to calculate string length in Unicode characters
 Add functionality to SubSlice a tensor.
 Add searchsorted (ie lower/upper_bound) op.
 Add model explainability to Boosted Trees.
 Support negative positions for tf.substr
- There was previously a bug in the bijector_impl where _reduce_jacobian_det_over_event did not handle scalar ILDJ implementations properly.
 In tf eager execution, allow reentering a GradientTape context
- Add tf_api_version flag. If --define=tf_api_version=2 is passed in, then bazel will build TensorFlow API version 2.0. Note that TensorFlow 2.0 is under active development and has no guarantees at this point.
- Add additional compression options to `TFRecordWriter`.
 Performance improvements for regex full match operations.
- Replace `tf.GraphKeys.VARIABLES` with `tf.GraphKeys.GLOBAL_VARIABLES`.
 Remove unused dynamic learning rate support.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
(David) SiuKei Muk, Ag Ramesh, Anton Dmitriev, Artem Sobolev, AvijitNervana, Bairen Yi, Bruno Goncalves, By Shen, candy.dc, Cheng Chen, Clayne Robison, coder3101, Dao Zhang, Elms, Fei Hu, feiquan, Geoffrey Irving, Guozhong Zhuang, hellcom, Hoeseong Kim, imsheridan, Jason Furmanek, Jason Zaman, Jenny Sahng, jiefangxuanyan, Johannes Bannhofer, Jonathan Homer, KoanSin Tan, kouml, Loo Rong Jie, Lukas Geiger, manipopopo, Ming Li, Moritz KröGer, Naurril, Niranjan Hasabnis, Pan Daoxin, Peng Yu, pengwa, rasmi, Roger Xin, Roland Fernandez, Sami Kama, Samuel Matzek, Sangjung Woo, Sergei Lebedev, Sergii Khomenko, shaohua, Shaohua Zhang, Shujian2015, Sunitha Kambhampati, tomguluson92, ViníCius Camargo, wangsiyu, weidankong, WenHeng (Jack) Chung, William D. Irons, Xin Jin, Yan Facai (颜发才), Yanbo Liang, Yash Katariya, Yong Tang, 在原佐为
TensorFlow 1.10.0
av8ramit released this
Release 1.10.0
Major Features And Improvements
- The `tf.lite` runtime now supports `complex64`.
- Initial Bigtable integration for `tf.data`.
- Improved local run behavior in `tf.estimator.train_and_evaluate` which does not reload checkpoints for evaluation.
- `RunConfig` now sets device_filters to restrict how workers and PS can communicate. This can speed up training and ensure clean shutdowns in some situations. But if you have jobs that require communication between workers, you will have to set custom session_options in your `RunConfig`.
- Moved Distributions and Bijectors from `tf.contrib.distributions` to TensorFlow Probability (TFP). `tf.contrib.distributions` is now deprecated and will be removed by the end of 2018.
- Adding new endpoints for existing tensorflow symbols. These endpoints are going to be the preferred endpoints going forward and may replace some of the existing endpoints in the future. See below for the complete list. New symbols have been added to the following modules: `tf.debugging`, `tf.dtypes`, `tf.image`, `tf.io`, `tf.linalg`, `tf.manip`, `tf.math`, `tf.quantization`, `tf.strings`.
Breaking Changes
 Prebuilt binaries are now (as of TensorFlow 1.10) built against NCCL 2.2 and no longer include NCCL in the binary install. TensorFlow usage with multiple GPUs and NCCL requires upgrade to NCCL 2.2. See updated install guides: Installing TensorFlow on Ubuntu and Install TensorFlow from Sources.
 Starting from TensorFlow 1.11, Windows builds will use Bazel. Therefore, we will drop official support for cmake.
Bug Fixes and Other Changes
- tf.data:
  - `tf.contrib.data.group_by_reducer()` is now available via the public API.
  - `tf.contrib.data.choose_from_datasets()` is now available via the public API.
  - Adding `drop_remainder` argument to `tf.data.Dataset.batch()` and `tf.data.Dataset.padded_batch()`, deprecating `tf.contrib.data.batch_and_drop_remainder()` and `tf.contrib.data.padded_batch_and_drop_remainder()`.
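The `drop_remainder` argument above guarantees statically shaped batches by discarding the final short batch. A minimal sketch in TF 2.x eager style:

```python
import tensorflow as tf

# 10 elements batched by 4 with drop_remainder=True yields two full
# batches; the trailing batch of 2 is discarded.
ds = tf.data.Dataset.range(10).batch(4, drop_remainder=True)
for batch in ds:
    print(batch.shape)  # every batch is exactly (4,)
```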
- tf.estimator:
  - `Estimator`s now use custom savers included in `EstimatorSpec` scaffolds for saving SavedModels during export.
  - `EstimatorSpec` will now add a default prediction output for export if no `export_output` is provided, eliminating the need to explicitly include a `PredictOutput` object in the `model_fn` for simple use-cases.
  - Support sparse_combiner in canned Linear Estimators.
- Added batch normalization to `DNNClassifier`, `DNNRegressor`, and `DNNEstimator`.
- Adding ranking support for boosted trees.
- Adding center bias option for boosted trees.
- Add `synchronization` and `aggregation` args to get_variable(). These args will be used for distributed variables.
- Add `synchronization` and `aggregation` args to the layer `add_weight()` API. These args will be used for distributed variables.
- `tf.losses.*` do not add to the global collection when executing eagerly (to avoid leaking memory).
- Support different summary and checkpoint directories in `tf.train.MonitoredTrainingSession()`.
- Added IndRNN, IndyGRU, and IndyLSTM cells to `tf.contrib.rnn`.
- Add safe static factory functions for SparseTensor and convert all CHECKs to DCHECKs. Using the constructor directly is unsafe and deprecated.
- Make the Bigtable client connection pool configurable & increase the default # of connections for performance.
- Added derivative of `tf.random_gamma` with respect to the alpha parameter.
- Added derivative of `tf.igamma(a, x)` and `tf.igammac(a, x)` with respect to a.
- Modified Bessel functions of order zero and one.
- Add FillTriangular Bijector to create triangular matrices.
- Added support for Type III DCT, and `tf.spectral.idct(type=2|3)`.
- Correctly handle CuDNN RNN weights loaded when nested in `TimeDistributed`.
- Adding per-element weight support for `WALSComputePartialLhsAndRhsOp`.
- ZerosLike and OnesLike ops treated as constants by Graph Transform Tool.
 Gamma distribution and the derived distributions (Beta, Dirichlet, Student's t, inverse Gamma) now fully reparameterized.
 Java: Experimental wrapper classes to make graph generation easier. Thanks @karllessard and @kbsriram
 Build & link in secure gRPC components (switch from the insecure grpc dependency to secure grpc dependency).
- Adding new endpoints for existing tensorflow symbols. These endpoints are going to be the preferred endpoints going forward and may replace some of the existing endpoints in the future. List of new endpoints:
  - New endpoints in `tf.image` namespace: `tf.image.extract_image_patches`.
  - New endpoints in `tf.debugging` namespace: `tf.debugging.check_numerics`, `tf.debugging.is_finite`, `tf.debugging.is_inf`, `tf.debugging.is_nan`.
  - New endpoints in `tf.dtypes` namespace: `tf.dtypes.as_string`.
  - New endpoints in `tf.io` namespace: `tf.io.decode_base64`, `tf.io.decode_compressed`, `tf.io.decode_json_example`, `tf.io.decode_raw`, `tf.io.encode_base64`, `tf.io.matching_files`, `tf.io.parse_tensor`, `tf.io.read_file`, `tf.io.write_file`.
  - New endpoints in `tf.linalg` namespace: `tf.linalg.cross`, `tf.linalg.tensor_diag` (corresponds to `tf.diag`), `tf.linalg.tensor_diag_part` (corresponds to `tf.diag_part`).
  - New endpoints in `tf.manip` namespace: `tf.manip.batch_to_space_nd`, `tf.manip.gather_nd`, `tf.manip.reshape`, `tf.manip.reverse`, `tf.manip.scatter_nd`, `tf.manip.space_to_batch_nd`, `tf.manip.tile`.
  - New endpoints in `tf.math` namespace: `tf.math.acos`, `tf.math.acosh`, `tf.math.add`, `tf.math.asin`, `tf.math.asinh`, `tf.math.atan`, `tf.math.atan2`, `tf.math.atanh`, `tf.math.betainc`, `tf.math.ceil`, `tf.math.cos`, `tf.math.cosh`, `tf.math.digamma`, `tf.math.equal`, `tf.math.erfc`, `tf.math.exp`, `tf.math.expm1`, `tf.math.floor`, `tf.math.greater`, `tf.math.greater_equal`, `tf.math.igamma`, `tf.math.igammac`, `tf.math.invert_permutation`, `tf.math.less`, `tf.math.less_equal`, `tf.math.lgamma`, `tf.math.log`, `tf.math.log1p`, `tf.math.logical_and`, `tf.math.logical_not`, `tf.math.logical_or`, `tf.math.maximum`, `tf.math.minimum`, `tf.math.not_equal`, `tf.math.polygamma`, `tf.math.reciprocal`, `tf.math.rint`, `tf.math.rsqrt`, `tf.math.segment_max`, `tf.math.segment_mean`, `tf.math.segment_min`, `tf.math.segment_prod`, `tf.math.segment_sum`, `tf.math.sin`, `tf.math.sinh`, `tf.math.softplus`, `tf.math.softsign`, `tf.math.squared_difference`, `tf.math.tan`, `tf.math.unsorted_segment_max`, `tf.math.unsorted_segment_min`, `tf.math.unsorted_segment_prod`, `tf.math.unsorted_segment_sum`, `tf.math.zeta`.
  - New endpoints in `tf.quantization` namespace: `tf.quantization.dequantize`, `tf.quantization.fake_quant_with_min_max_args`, `tf.quantization.fake_quant_with_min_max_args_gradient`, `tf.quantization.fake_quant_with_min_max_vars`, `tf.quantization.fake_quant_with_min_max_vars_gradient`, `tf.quantization.fake_quant_with_min_max_vars_per_channel`, `tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient`.
  - New endpoints in `tf.strings` namespace: `tf.strings.join` (corresponds to `tf.string_join`), `tf.strings.regex_replace`, `tf.strings.to_number` (corresponds to `tf.string_to_number`), `tf.strings.strip` (corresponds to `tf.string_strip`), `tf.strings.substr`, `tf.strings.to_hash_bucket` (corresponds to `tf.string_to_hash_bucket`), `tf.strings.to_hash_bucket_fast` (corresponds to `tf.string_to_hash_bucket_fast`), `tf.strings.to_hash_bucket_strong` (corresponds to `tf.string_to_hash_bucket_strong`).
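The new `tf.strings` endpoints mirror the older top-level symbols. A minimal sketch, in TF 2.x eager style:

```python
import tensorflow as tf

# tf.strings.join replaces the older tf.string_join.
joined = tf.strings.join(["tensor", "flow"])

# tf.strings.substr takes a position and a length.
prefix = tf.strings.substr(tf.constant("TensorFlow"), 0, 6)
print(joined.numpy().decode(), prefix.numpy().decode())  # tensorflow Tensor
```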
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Ag Ramesh, Alex Wiltschko, Alexander Pantyukhin, Amogh Mannekote, An Jiaoyang, Andrei Nigmatulin, Andrew Ginns, BjøRn Moholt, Brett Koonce, Chengzhi Chen, Chinmay Das, Christian Ertler, Christoph Boeddeker, Clayne Robison, Courtial Florian, ctiijima, Dan Douthit, Dan J, Dan Ringwalt, EFanZh, Emanuele Ballarin, eqy, Evgeniy Zheltonozhskiy, Freedom" KoanSin Tan, FréDéRic BranchaudCharron, G K, gracehoney, Guillaume Klein, Guozhong Zhuang, HsienYang Li, hsm207, ImSheridan, Jayaram Bobba, Jiandong Ruan, Jie, Joel Shor, Jonas Rauber, Jongmin Baek, jsawruk, Karan Kaw, Karl Lessard, karl@kubx.ca, Kb Sriram, KinmanLam, leiiwang, Li, Yiqiang, Loo Rong Jie, Mahmoud Abuzaina, Mahmoud Aslan, ManHyuk, Martin Patz, Martin Zeitler, mktozk, Mohammad Ashraf Bhuiyan, mrTsjolder, Naman Bhalla, Nick Felt, Nicolas Lopez, Niranjan Hasabnis, Nishidha Panpaliya, Nitish, nrstott, Nutti, Parag Jain, PeterLee, Philipp Jund, Rach L, Rafal Wojdyla, Roland Zimmermann, Sergei Lebedev, SneakyFish5, Soila Kavulya, Sriram Veturi, Steven Schmatz, Taehoon Lee, Tang, Wenyi, Taras Sereda, Ted Chang, Tim Zaman, Tristan Rice, tucan, vchigrin, Vikram Tiwari, Vincent, WeberXie, William D. Irons, Yan Facai (颜发才), Yong Tang, Yu Yi, Yuxin Wu, Zé ViníCius
TensorFlow 1.10.0rc1
av8ramit released this
Release 1.10.0
Major Features And Improvements
- The `tf.lite` runtime now supports `complex64`.
- Initial Bigtable integration for `tf.data`.
- Improved local run behavior in `tf.estimator.train_and_evaluate` which does not reload checkpoints for evaluation.
- `RunConfig` now sets device_filters to restrict how workers and PS can communicate. This can speed up training and ensure clean shutdowns in some situations. But if you have jobs that require communication between workers, you will have to set custom session_options in your `RunConfig`.
- Moved Distributions and Bijectors from `tf.contrib.distributions` to TensorFlow Probability (TFP). `tf.contrib.distributions` is now deprecated and will be removed by the end of 2018.
- Adding new endpoints for existing tensorflow symbols. These endpoints are going to be the preferred endpoints going forward and may replace some of the existing endpoints in the future. See below for the complete list. New symbols have been added to the following modules: `tf.debugging`, `tf.dtypes`, `tf.image`, `tf.io`, `tf.linalg`, `tf.manip`, `tf.math`, `tf.quantization`, `tf.strings`.
Breaking Changes
 Prebuilt binaries are now (as of TensorFlow 1.10) built against NCCL 2.2 and no longer include NCCL in the binary install. TensorFlow usage with multiple GPUs and NCCL requires upgrade to NCCL 2.2. See updated install guides: Installing TensorFlow on Ubuntu and Install TensorFlow from Sources.
 Starting from TensorFlow 1.11, Windows builds will use Bazel. Therefore, we will drop official support for cmake.
Bug Fixes and Other Changes
- tf.data:
  - `tf.contrib.data.group_by_reducer()` is now available via the public API.
  - `tf.contrib.data.choose_from_datasets()` is now available via the public API.
  - Adding `drop_remainder` argument to `tf.data.Dataset.batch()` and `tf.data.Dataset.padded_batch()`, deprecating `tf.contrib.data.batch_and_drop_remainder()` and `tf.contrib.data.padded_batch_and_drop_remainder()`.
tf.estimator
:Estimator
s now use custom savers included inEstimatorSpec
scaffolds for saving SavedModels during export.EstimatorSpec
will now add a default prediction output for export if noexport_output
is provided, eliminating the need to explicitly include aPredictOutput
object in themodel_fn
for simple usecases. Support sparse_combiner in canned Linear Estimators.
 Added batch normalization to
DNNClassifier
,DNNRegressor
, andDNNEstimator
.  Adding ranking support for boosted trees.
 Adding center bias option for boosted trees.
 Add
synchronization
andaggregation
args to get_variable(). These args will be used for distributed variables.  Add
synchronization
andaggregation
args to the layeradd_weight()
API. These args will be used for distributed variables. tf.losses.*
do not add to the global collection when executing eagerly (to avoid leaking memory). Support different summary and checkpoint directories in
tf.train.MonitoredTrainingSession()
.  Added IndRNN, IndyGRU, and IndyLSTM cells to
tf.contrib.rnn
.  Add safe static factory functions for SparseTensor and convert all CHECKs to DCHECKs. Using the constructor directly is unsafe and deprecated.
 Make the Bigtable client connection pool configurable & increase the default # of connections for performance.
 Added derivative of
tf.random_gamma
with respect to the alpha parameter.  Added derivative of
tf.igamma(a, x)
andtf.igammac(a, x)
with respect to a.  Modified Bessel functions of order zero and one.
 Add FillTriangular Bijector to create triangular matrices.
 Added support for Type III DCT, and
tf.spectral.idct(type=23)
.  Correctly handle CuDNN RNN weight loaded when nest in
TimeDistributed
.  Adding perelement weight support for
WALSComputePartialLhsAndRhsOp
.  ZerosLike and OnesLike ops treated as constants by Graph Transform Tool.
 Gamma distribution and the derived distributions (Beta, Dirichlet, Student's t, inverse Gamma) now fully reparameterized.
 Java: Experimental wrapper classes to make graph generation easier. Thanks @karllessard and @kbsriram
 Build & link in secure gRPC components (switch from the insecure grpc dependency to secure grpc dependency).
 Adding new endpoints for existing tensorflow symbols. These endpoints are going to be the preferred endpoints going forward and may replace some of the existing endpoints in the future. List of new endpoints:
  * New endpoints in the `tf.image` namespace: `tf.image.extract_image_patches`.
  * New endpoints in the `tf.debugging` namespace: `tf.debugging.check_numerics`, `tf.debugging.is_finite`, `tf.debugging.is_inf`, `tf.debugging.is_nan`.
  * New endpoints in the `tf.dtypes` namespace: `tf.dtypes.as_string`.
  * New endpoints in the `tf.io` namespace: `tf.io.decode_base64`, `tf.io.decode_compressed`, `tf.io.decode_json_example`, `tf.io.decode_raw`, `tf.io.encode_base64`, `tf.io.matching_files`, `tf.io.parse_tensor`, `tf.io.read_file`, `tf.io.write_file`.
  * New endpoints in the `tf.linalg` namespace: `tf.linalg.cross`, `tf.linalg.tensor_diag` (corresponds to `tf.diag`), `tf.linalg.tensor_diag_part` (corresponds to `tf.diag_part`).
  * New endpoints in the `tf.manip` namespace: `tf.manip.batch_to_space_nd`, `tf.manip.gather_nd`, `tf.manip.reshape`, `tf.manip.reverse`, `tf.manip.scatter_nd`, `tf.manip.space_to_batch_nd`, `tf.manip.tile`.
  * New endpoints in the `tf.math` namespace: `tf.math.acos`, `tf.math.acosh`, `tf.math.add`, `tf.math.asin`, `tf.math.asinh`, `tf.math.atan`, `tf.math.atan2`, `tf.math.atanh`, `tf.math.betainc`, `tf.math.ceil`, `tf.math.cos`, `tf.math.cosh`, `tf.math.digamma`, `tf.math.equal`, `tf.math.erfc`, `tf.math.exp`, `tf.math.expm1`, `tf.math.floor`, `tf.math.greater`, `tf.math.greater_equal`, `tf.math.igamma`, `tf.math.igammac`, `tf.math.invert_permutation`, `tf.math.less`, `tf.math.less_equal`, `tf.math.lgamma`, `tf.math.log`, `tf.math.log1p`, `tf.math.logical_and`, `tf.math.logical_not`, `tf.math.logical_or`, `tf.math.maximum`, `tf.math.minimum`, `tf.math.not_equal`, `tf.math.polygamma`, `tf.math.reciprocal`, `tf.math.rint`, `tf.math.rsqrt`, `tf.math.segment_max`, `tf.math.segment_mean`, `tf.math.segment_min`, `tf.math.segment_prod`, `tf.math.segment_sum`, `tf.math.sin`, `tf.math.sinh`, `tf.math.softplus`, `tf.math.softsign`, `tf.math.squared_difference`, `tf.math.tan`, `tf.math.unsorted_segment_max`, `tf.math.unsorted_segment_min`, `tf.math.unsorted_segment_prod`, `tf.math.unsorted_segment_sum`, `tf.math.zeta`.
  * New endpoints in the `tf.quantization` namespace: `tf.quantization.dequantize`, `tf.quantization.fake_quant_with_min_max_args`, `tf.quantization.fake_quant_with_min_max_args_gradient`, `tf.quantization.fake_quant_with_min_max_vars`, `tf.quantization.fake_quant_with_min_max_vars_gradient`, `tf.quantization.fake_quant_with_min_max_vars_per_channel`, `tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient`.
  * New endpoints in the `tf.strings` namespace: `tf.strings.join` (corresponds to `tf.string_join`), `tf.strings.regex_replace`, `tf.strings.to_number` (corresponds to `tf.string_to_number`), `tf.strings.strip` (corresponds to `tf.string_strip`), `tf.strings.substr`, `tf.strings.to_hash_bucket` (corresponds to `tf.string_to_hash_bucket`), `tf.strings.to_hash_bucket_fast` (corresponds to `tf.string_to_hash_bucket_fast`), `tf.strings.to_hash_bucket_strong` (corresponds to `tf.string_to_hash_bucket_strong`).
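The `drop_remainder` argument added to `Dataset.batch()` above controls whether a final short batch is emitted. Its semantics can be sketched in plain Python (an illustrative stand-in, not the TensorFlow implementation):

```python
def batch(items, batch_size, drop_remainder=False):
    """Group items into fixed-size batches, optionally dropping a short final batch."""
    batches = [items[i:i + batch_size] for i in range(0, len(items), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()  # discard the partial batch, mirroring drop_remainder=True
    return batches

print(batch(list(range(10)), 4))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
print(batch(list(range(10)), 4, drop_remainder=True))
# [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Dropping the remainder gives every batch a static shape, which some downstream ops (and TPUs) require.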
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Ag Ramesh, Alex Wiltschko, Alexander Pantyukhin, Amogh Mannekote, An Jiaoyang, Andrei Nigmatulin, Andrew Ginns, BjøRn Moholt, Brett Koonce, Chengzhi Chen, Chinmay Das, Christian Ertler, Christoph Boeddeker, Clayne Robison, Courtial Florian, ctiijima, Dan Douthit, Dan J, Dan Ringwalt, EFanZh, Emanuele Ballarin, eqy, Evgeniy Zheltonozhskiy, Freedom" KoanSin Tan, FréDéRic BranchaudCharron, G K, gracehoney, Guillaume Klein, Guozhong Zhuang, HsienYang Li, hsm207, ImSheridan, Jayaram Bobba, Jiandong Ruan, Jie, Joel Shor, Jonas Rauber, Jongmin Baek, jsawruk, Karan Kaw, Karl Lessard, karl@kubx.ca, Kb Sriram, KinmanLam, leiiwang, Li, Yiqiang, Loo Rong Jie, Mahmoud Abuzaina, Mahmoud Aslan, ManHyuk, Martin Patz, Martin Zeitler, mktozk, Mohammad Ashraf Bhuiyan, mrTsjolder, Naman Bhalla, Nick Felt, Nicolas Lopez, Niranjan Hasabnis, Nishidha Panpaliya, Nitish, nrstott, Nutti, Parag Jain, PeterLee, Philipp Jund, Rach L, Rafal Wojdyla, Roland Zimmermann, Sergei Lebedev, SneakyFish5, Soila Kavulya, Sriram Veturi, Steven Schmatz, Taehoon Lee, Tang, Wenyi, Taras Sereda, Ted Chang, Tim Zaman, Tristan Rice, tucan, vchigrin, Vikram Tiwari, Vincent, WeberXie, William D. Irons, Yan Facai (颜发才), Yong Tang, Yu Yi, Yuxin Wu, Zé ViníCius
TensorFlow 1.10.0rc0
case540 released this
Release 1.10.0
Major Features And Improvements
* The `tf.lite` runtime now supports `complex64`.
* Initial Bigtable integration for `tf.data`.
* Improved local run behavior in `tf.estimator.train_and_evaluate`, which no longer reloads checkpoints for evaluation.
* `RunConfig` now sets device_filters to restrict how workers and PS can communicate. This can speed up training and ensure clean shutdowns in some situations. But if you have jobs that require communication between workers, you will have to set custom `session_options` in your `RunConfig`.
* Moved Distributions and Bijectors from `tf.contrib.distributions` to TensorFlow Probability (TFP). `tf.contrib.distributions` is now deprecated and will be removed by the end of 2018.
* Added new endpoints for existing TensorFlow symbols. These endpoints will be the preferred endpoints going forward and may replace some of the existing endpoints in the future. See below for the complete list. New symbols have been added to the following modules: `tf.debugging`, `tf.dtypes`, `tf.image`, `tf.io`, `tf.linalg`, `tf.manip`, `tf.math`, `tf.quantization`, `tf.strings`.
Breaking Changes
* Prebuilt binaries are now (as of TensorFlow 1.10) built against NCCL 2.2 and no longer include NCCL in the binary install. TensorFlow usage with multiple GPUs and NCCL requires upgrading to NCCL 2.2. See the updated install guides: Installing TensorFlow on Ubuntu and Install TensorFlow from Sources.
* Starting from TensorFlow 1.11, Windows builds will use Bazel. Therefore, we will drop official support for CMake.
Bug Fixes and Other Changes
* `tf.data`:
  * `tf.contrib.data.group_by_reducer()` is now available via the public API.
  * `tf.contrib.data.choose_from_datasets()` is now available via the public API.
  * Added a `drop_remainder` argument to `tf.data.Dataset.batch()` and `tf.data.Dataset.padded_batch()`, deprecating `tf.contrib.data.batch_and_drop_remainder()` and `tf.contrib.data.padded_batch_and_drop_remainder()`.
* `tf.estimator`:
  * `Estimator`s now use custom savers included in `EstimatorSpec` scaffolds for saving SavedModels during export.
  * `EstimatorSpec` will now add a default prediction output for export if no `export_output` is provided, eliminating the need to explicitly include a `PredictOutput` object in the `model_fn` for simple use cases.
  * Support `sparse_combiner` in canned Linear Estimators.
  * Added batch normalization to `DNNClassifier`, `DNNRegressor`, and `DNNEstimator`.
  * Added ranking support for boosted trees.
  * Added a center-bias option for boosted trees.
* Added `synchronization` and `aggregation` args to `get_variable()`. These args will be used for distributed variables.
* Added `synchronization` and `aggregation` args to the layer `add_weight()` API. These args will be used for distributed variables.
* `tf.losses.*` no longer add to the global collection when executing eagerly (to avoid leaking memory).
* Support different summary and checkpoint directories in `tf.train.MonitoredTrainingSession()`.
* Added IndRNN, IndyGRU, and IndyLSTM cells to `tf.contrib.rnn`.
* Added safe static factory functions for `SparseTensor` and converted all CHECKs to DCHECKs. Using the constructor directly is unsafe and deprecated.
* Made the Bigtable client connection pool configurable and increased the default number of connections for performance.
* Added the derivative of `tf.random_gamma` with respect to the alpha parameter.
* Added the derivatives of `tf.igamma(a, x)` and `tf.igammac(a, x)` with respect to `a`.
* Modified Bessel functions of order zero and one.
* Added a `FillTriangular` Bijector to create triangular matrices.
* Added support for Type III DCT, and `tf.spectral.idct(type=2|3)`.
* Correctly handle CuDNN RNN weights loaded when nested in `TimeDistributed`.
* Added per-element weight support for `WALSComputePartialLhsAndRhsOp`.
* `ZerosLike` and `OnesLike` ops are now treated as constants by the Graph Transform Tool.
* The Gamma distribution and the distributions derived from it (Beta, Dirichlet, Student's t, inverse Gamma) are now fully reparameterized.
* Java: Experimental wrapper classes to make graph generation easier. Thanks @karllessard and @kbsriram
* Build and link in secure gRPC components (switch from the insecure gRPC dependency to the secure one).
* Added new endpoints for existing TensorFlow symbols. These endpoints will be the preferred endpoints going forward and may replace some of the existing endpoints in the future. List of new endpoints:
  * New endpoints in the `tf.image` namespace: `tf.image.extract_image_patches`.
  * New endpoints in the `tf.debugging` namespace: `tf.debugging.check_numerics`, `tf.debugging.is_finite`, `tf.debugging.is_inf`, `tf.debugging.is_nan`.
  * New endpoints in the `tf.dtypes` namespace: `tf.dtypes.as_string`.
  * New endpoints in the `tf.io` namespace: `tf.io.decode_base64`, `tf.io.decode_compressed`, `tf.io.decode_json_example`, `tf.io.decode_raw`, `tf.io.encode_base64`, `tf.io.matching_files`, `tf.io.parse_tensor`, `tf.io.read_file`, `tf.io.write_file`.
  * New endpoints in the `tf.linalg` namespace: `tf.linalg.cross`, `tf.linalg.tensor_diag` (corresponds to `tf.diag`), `tf.linalg.tensor_diag_part` (corresponds to `tf.diag_part`).
  * New endpoints in the `tf.manip` namespace: `tf.manip.batch_to_space_nd`, `tf.manip.gather_nd`, `tf.manip.reshape`, `tf.manip.reverse`, `tf.manip.scatter_nd`, `tf.manip.space_to_batch_nd`, `tf.manip.tile`.
  * New endpoints in the `tf.math` namespace: `tf.math.acos`, `tf.math.acosh`, `tf.math.add`, `tf.math.asin`, `tf.math.asinh`, `tf.math.atan`, `tf.math.atan2`, `tf.math.atanh`, `tf.math.betainc`, `tf.math.ceil`, `tf.math.cos`, `tf.math.cosh`, `tf.math.digamma`, `tf.math.equal`, `tf.math.erfc`, `tf.math.exp`, `tf.math.expm1`, `tf.math.floor`, `tf.math.greater`, `tf.math.greater_equal`, `tf.math.igamma`, `tf.math.igammac`, `tf.math.invert_permutation`, `tf.math.less`, `tf.math.less_equal`, `tf.math.lgamma`, `tf.math.log`, `tf.math.log1p`, `tf.math.logical_and`, `tf.math.logical_not`, `tf.math.logical_or`, `tf.math.maximum`, `tf.math.minimum`, `tf.math.not_equal`, `tf.math.polygamma`, `tf.math.reciprocal`, `tf.math.rint`, `tf.math.rsqrt`, `tf.math.segment_max`, `tf.math.segment_mean`, `tf.math.segment_min`, `tf.math.segment_prod`, `tf.math.segment_sum`, `tf.math.sin`, `tf.math.sinh`, `tf.math.softplus`, `tf.math.softsign`, `tf.math.squared_difference`, `tf.math.tan`, `tf.math.unsorted_segment_max`, `tf.math.unsorted_segment_min`, `tf.math.unsorted_segment_prod`, `tf.math.unsorted_segment_sum`, `tf.math.zeta`.
  * New endpoints in the `tf.quantization` namespace: `tf.quantization.dequantize`, `tf.quantization.fake_quant_with_min_max_args`, `tf.quantization.fake_quant_with_min_max_args_gradient`, `tf.quantization.fake_quant_with_min_max_vars`, `tf.quantization.fake_quant_with_min_max_vars_gradient`, `tf.quantization.fake_quant_with_min_max_vars_per_channel`, `tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient`.
  * New endpoints in the `tf.strings` namespace: `tf.strings.join` (corresponds to `tf.string_join`), `tf.strings.regex_replace`, `tf.strings.to_number` (corresponds to `tf.string_to_number`), `tf.strings.strip` (corresponds to `tf.string_strip`), `tf.strings.substr`, `tf.strings.to_hash_bucket` (corresponds to `tf.string_to_hash_bucket`), `tf.strings.to_hash_bucket_fast` (corresponds to `tf.string_to_hash_bucket_fast`), `tf.strings.to_hash_bucket_strong` (corresponds to `tf.string_to_hash_bucket_strong`).
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Ag Ramesh, Alex Wiltschko, Alexander Pantyukhin, Amogh Mannekote, An Jiaoyang, Andrei Nigmatulin, Andrew Ginns, BjøRn Moholt, Brett Koonce, Chengzhi Chen, Chinmay Das, Christian Ertler, Christoph Boeddeker, Clayne Robison, Courtial Florian, ctiijima, Dan Douthit, Dan J, Dan Ringwalt, EFanZh, Emanuele Ballarin, eqy, Evgeniy Zheltonozhskiy, Freedom" KoanSin Tan, FréDéRic BranchaudCharron, G K, gracehoney, Guillaume Klein, Guozhong Zhuang, HsienYang Li, hsm207, ImSheridan, Jayaram Bobba, Jiandong Ruan, Jie, Joel Shor, Jonas Rauber, Jongmin Baek, jsawruk, Karan Kaw, Karl Lessard, karl@kubx.ca, Kb Sriram, KinmanLam, leiiwang, Li, Yiqiang, Loo Rong Jie, Mahmoud Abuzaina, Mahmoud Aslan, ManHyuk, Martin Patz, Martin Zeitler, mktozk, Mohammad Ashraf Bhuiyan, mrTsjolder, Naman Bhalla, Nick Felt, Nicolas Lopez, Niranjan Hasabnis, Nishidha Panpaliya, Nitish, nrstott, Nutti, Parag Jain, PeterLee, Philipp Jund, Rach L, Rafal Wojdyla, Roland Zimmermann, Sergei Lebedev, SneakyFish5, Soila Kavulya, Sriram Veturi, Steven Schmatz, Taehoon Lee, Tang, Wenyi, Taras Sereda, Ted Chang, Tim Zaman, Tristan Rice, tucan, vchigrin, Vikram Tiwari, Vincent, WeberXie, William D. Irons, Yan Facai (颜发才), Yong Tang, Yu Yi, Yuxin Wu, Zé ViníCius
TensorFlow 1.9.0
case540 released this
Release 1.9.0
Major Features And Improvements
* Updated docs for `tf.keras`: new Keras-based Get Started and Programmer's Guide pages.
* Updated `tf.keras` to the Keras 2.1.6 API.
* Added `tf.keras.layers.CuDNNGRU` and `tf.keras.layers.CuDNNLSTM` layers.
* Added support for core feature columns and losses to gradient boosted trees estimators.
* The Python interface for the TFLite Optimizing Converter has been expanded, and the command line interface (a.k.a. `toco`, `tflite_convert`) is once again included in the standard `pip` installation.
* Improved data loading and text processing.
* Added experimental support for new pre-made Estimators.
* The `distributions.Bijector` API supports broadcasting for Bijectors with new API changes.
Breaking Changes
* If you're opening empty variable scopes, replace `variable_scope('', ...)` with `variable_scope(tf.get_variable_scope(), ...)`.
* Headers used for building custom ops have been moved from `site-packages/external` into `site-packages/tensorflow/include/external`.
Bug Fixes and Other Changes

* `tfe.Network` is deprecated. Please inherit from `tf.keras.Model`.
* Layered variable names have changed in the following conditions:
  * Using `tf.keras.layers` with custom variable scopes.
  * Using `tf.layers` in a subclassed `tf.keras.Model` class. See here for more details.
* `tf.data`:
  * `Dataset.from_generator()` now accepts an `args` list, in order to create nested generators.
  * `Dataset.list_files()` now produces deterministic results when `shuffle=False` or a `seed` is passed.
  * `tf.contrib.data.sample_from_datasets()` and `tf.contrib.data.choose_from_datasets()` make it easier to sample or deterministically choose elements from multiple datasets.
  * `tf.contrib.data.make_csv_dataset()` now supports line breaks in quoted strings, and two infrequently used arguments were removed.
  * (C++) `DatasetBase::DebugString()` is now `const`.
  * (C++) `DatasetBase::MakeIterator()` has been renamed to `DatasetBase::MakeIteratorInternal()`.
  * (C++) An `IteratorBase::Initialize()` method was added to support raising errors during iterator construction.
* Eager Execution:
  * Added the ability to pause recording operations for gradient computation via `tf.GradientTape.stop_recording`.
  * Updated documentation and introductory notebooks.
* `tf.keras`:
  * Moved Keras code out of the `_impl` folder and removed API files.
  * `tf.keras.Model.save_weights` now saves in TensorFlow format by default.
  * Enabled dataset iterators to be passed to `tf.keras.Model` training/eval methods.
* TensorFlow Debugger (tfdbg):
  * Fixed an issue in which the TensorBoard Debugger Plugin could not handle total source file size exceeding the gRPC message size limit (4 MB).
* `tf.contrib`:
  * `tf.contrib.framework.zero_initializer` supports `ResourceVariable`.
  * Added "constrained_optimization" to tensorflow/contrib.
* Other:
  * Added GCS configuration ops.
  * Changed the signature of `MakeIterator` to enable propagating error status.
  * KL divergence for two Dirichlet distributions.
  * More consistent `GcsFileSystem` behavior for certain reads past EOF.
  * Updated the benchmark for `tf.scan` to match ranges across eager and graph modes.
  * Fixed a bug in the `tf.reduce_prod` gradient for complex dtypes.
  * Allow the use of '.' in variables (e.g. `hparams.parse('a.b=1.0')`), which would previously raise an error. This corresponds to an attribute name with an embedded '.' symbol (e.g. 'a.b'), which can only be accessed indirectly (e.g. through `getattr` and `setattr`). To set this up, the user must first explicitly add the variable to the hparam object (e.g. `hparams.add_hparam(name='a.b', value=0.0)`).
  * Benchmark for `tf.scan` in graph and eager modes.
  * Added `complex128` support to FFT, FFT2D, FFT3D, IFFT, IFFT2D, and IFFT3D.
  * Made ids unique in `nn.embedding_lookup_sparse`. This helps to reduce RPC calls for looking up the embeddings when there are repeated ids in the batch.
  * Support indicator column in boosted trees.
  * Prevent `tf.gradients()` from backpropagating through integer tensors.
  * `LinearOperator[1D,2D,3D]Circulant` added to `tensorflow.linalg`.
  * `Conv3D`, `Conv3DBackpropInput`, and `Conv3DBackpropFilter` now support arbitrary.
  * Added `tf.train.Checkpoint` for reading/writing object-based checkpoints.
  * Added `LinearOperatorKronecker`, a dense-free implementation of the Kronecker product.
  * Allow `LinearOperator` to broadcast.
  * `SavedModelBuilder` will now deduplicate asset names that point to files with the same basename and the same contents. Note that this may result in new asset files being included in SavedModels in cases where assets with the same name but different contents were previously overwriting each other.
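The dotted-hparam rule above (register the name with `add_hparam` first, then access it only through `getattr`/`setattr`) can be sketched with a minimal stand-in class; this is an illustration of the access pattern, not the real `tf.contrib.training.HParams` implementation:

```python
class HParamsSketch:
    """Minimal stand-in showing how a dotted name like 'a.b' behaves as an attribute."""

    def add_hparam(self, name, value):
        # setattr works even when the name contains '.', unlike the obj.a.b syntax.
        setattr(self, name, value)

    def parse(self, expr):
        name, value = expr.split('=')
        if not hasattr(self, name):
            raise ValueError('unknown hyperparameter: %s' % name)
        # Coerce the string to the registered value's type.
        setattr(self, name, type(getattr(self, name))(value))

hp = HParamsSketch()
hp.add_hparam(name='a.b', value=0.0)   # must be registered explicitly first
hp.parse('a.b=1.0')
print(getattr(hp, 'a.b'))              # 1.0; note hp.a.b would instead look up 'a'
```

The key point is that `'a.b'` is a single attribute name, so normal dotted attribute syntax cannot reach it; only `getattr`/`setattr` can.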
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abdullah Alrasheed, Achal Shah, Ad530, ADiegoCAlonso, Aditya Yogi, Ag Ramesh, akindyakov, Andy Kernahan, Anya Petrova, Aurelien Geron, Ben, Ben Barsdell, BhavaniSubramanian, braincodercn, Brett Koonce, Brian Nemsick, Brian Zier, Bryan Heden, candy.dc, cclauss, Clayne Robison, ctiijima, Dalmo Cirne, David Norman, David T.H. Kao, DosLin, ekelsen, Elson Rodriguez, Erik Smistad, Felix Abecassis, Fergal Cotter, fo40225, foo0x29a, Freedom" KoanSin Tan, FréDéRic BranchaudCharron, gdh1995, Geoffrey Irving, Giuseppe, gracehoney, Guido Zuidhof, Guillaume Klein, Guozhong Zhuang, Haggai, Harald Husum, imsheridan, Ivan Zhang, Jan Zikes, Jayaram Bobba, Jesse Benson, Jesse Gumz, Jiajia Li, Jie, jinghuangintel, Jingwen, jjsjann123, Joe Yearsley, Joel Hestness, Joel Shor, josephyearsley, Junpeng Lao, Karol M. Langner, Kb Sriram, krantideep95, Krish Ravindranath, Letian Feng, Loo Rong Jie, Lukas Geiger, Maciej, Mahmoud Abuzaina, ManHyuk, Mark Ryan, mbhuiyan, Michal Turek, Mostafa Alaa, Myungsung Kwak, Nand Dalal, Nehal J Wani, Neil Tenenholtz, ngc92, Nicholas Nadeau, P.Eng., Avs, Niranjan Hasabnis, PHidringer, Paul Van Eck, Peng Yu, Qing Zhao, Qingying Chen, Quanlong, Rajendra Arora, Rholais Lii, rmanyari, Robin Richtsfeld, Russell Klopfer, Sagi, Sam Sendelbach, Sandeep N Gupta, Sandip Giri, Sarah Edkins, Scott Tseng, Sdalbsoo, Sergii Khomenko, Seungwoo Choi (Biggie), Seyed Majid Azimi, Shaoning Zeng, shengfuintel, Siu Kei, Muk, Smit Shilu, soonson, Stefan Schweter, Sukhwan Kim, Sunitha Kambhampati, Taehoon Lee, tamimaddari82, Tang, Wenyi, Ted Chang, u2takey, Utkarsh Upadhyay, Vadim Markovtsev, voegtlel, Wai Hon Law, wangsiyu, Wenhao Hu, wenhao.hu, William D. Irons, Yan Facai (颜发才), Yanbo Liang, Yihong Wang, Yilei (Dolee) Yang, Yong Tang, Yuan (Terry) Tang
TensorFlow 1.9.0rc2
av8ramit released this
Release 1.9.0
Major Features And Improvements
* Updated docs for `tf.keras`: new Keras-based Get Started and Programmer's Guide pages.
* Updated `tf.keras` to the Keras 2.1.6 API.
* Added `tf.keras.layers.CuDNNGRU` and `tf.keras.layers.CuDNNLSTM` layers.
* Added support for core feature columns and losses to gradient boosted trees estimators.
* The Python interface for the TFLite Optimizing Converter has been expanded, and the command line interface (a.k.a. `toco`, `tflite_convert`) is once again included in the standard `pip` installation.
* Improved data loading and text processing.
* Added experimental support for new pre-made Estimators.
* The `distributions.Bijector` API supports broadcasting for Bijectors with new API changes.
Breaking Changes
* If you're opening empty variable scopes, replace `variable_scope('', ...)` with `variable_scope(tf.get_variable_scope(), ...)`.
* Headers used for building custom ops have been moved from `site-packages/external` into `site-packages/tensorflow/include/external`.
Bug Fixes and Other Changes
* `tfe.Network` is deprecated. Please inherit from `tf.keras.Model`.
* Layered variable names have changed in the following conditions:
  * Using `tf.keras.layers` with custom variable scopes.
  * Using `tf.layers` in a subclassed `tf.keras.Model` class. See here for more details.
* `tf.data`:
  * The `DatasetBase::DebugString()` method is now `const`.
  * Added the `tf.contrib.data.sample_from_datasets()` API for randomly sampling from multiple datasets.
* Eager Execution:
* `tf.keras`:
  * Moved Keras code out of the `_impl` folder and removed API files.
  * `tf.keras.Model.save_weights` now saves in TensorFlow format by default.
  * Enabled dataset iterators to be passed to `tf.keras.Model` training/eval methods.
* Accelerated Linear Algebra (XLA):
* TensorFlow Debugger (tfdbg): fixed an issue in which the TensorBoard Debugger Plugin could not handle total source file size exceeding the gRPC message size limit (4 MB).
* `tf.contrib`:
  * Added `tf.contrib.data.choose_from_datasets()`.
  * `tf.contrib.data.make_csv_dataset()` now supports line breaks in quoted strings. Two arguments were removed from `make_csv_dataset`.
  * `tf.contrib.framework.zero_initializer` supports `ResourceVariable`.
  * Added "constrained_optimization" to tensorflow/contrib.
* Other:
  * Added GCS configuration ops.
  * Changed the signature of `MakeIterator` to enable propagating error status.
  * KL divergence for two Dirichlet distributions.
  * More consistent `GcsFileSystem` behavior for certain reads past EOF.
  * Updated the benchmark for `tf.scan` to match ranges across eager and graph modes.
  * Fixed a bug in the `tf.reduce_prod` gradient for complex dtypes.
  * Added an optional `args` argument to `Dataset.from_generator()`.
  * Allow the use of '.' in variables (e.g. `hparams.parse('a.b=1.0')`), which would previously raise an error. This corresponds to an attribute name with an embedded '.' symbol (e.g. 'a.b'), which can only be accessed indirectly (e.g. through `getattr` and `setattr`). To set this up, the user must first explicitly add the variable to the hparam object (e.g. `hparams.add_hparam(name='a.b', value=0.0)`).
  * Benchmark for `tf.scan` in graph and eager modes.
  * Added `complex128` support to FFT, FFT2D, FFT3D, IFFT, IFFT2D, and IFFT3D.
  * Made ids unique in `nn.embedding_lookup_sparse`. This helps to reduce RPC calls for looking up the embeddings when there are repeated ids in the batch.
  * Support indicator column in boosted trees.
  * Prevent `tf.gradients()` from backpropagating through integer tensors.
  * `LinearOperator[1D,2D,3D]Circulant` added to `tensorflow.linalg`.
  * `Conv3D`, `Conv3DBackpropInput`, and `Conv3DBackpropFilter` now support arbitrary.
  * Added `tf.train.Checkpoint` for reading/writing object-based checkpoints.
  * `Dataset.list_files()` now produces deterministic results when `shuffle=False` or a `seed` is passed.
  * Added `LinearOperatorKronecker`, a dense-free implementation of the Kronecker product.
  * Allow `LinearOperator` to broadcast.
  * `SavedModelBuilder` will now deduplicate asset names that point to files with the same basename and the same contents. Note that this may result in new asset files being included in SavedModels in cases where assets with the same name but different contents were previously overwriting each other.
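The new `args` parameter to `Dataset.from_generator()` lets one generator callable be instantiated with different arguments, which is what makes nested generators practical. The idea can be sketched in plain Python (an illustration of the calling convention, not the tf.data API):

```python
def from_generator(gen_fn, args=()):
    """Sketch: materialize a generator callable bound to the given arguments."""
    return list(gen_fn(*args))

def count_up_to(n):
    """Example generator parameterized by its argument."""
    yield from range(n)

print(from_generator(count_up_to, args=(3,)))
# [0, 1, 2]

# Nested use: each outer value parameterizes a fresh inner generator.
print([from_generator(count_up_to, args=(n,)) for n in (1, 2)])
# [[0], [0, 1]]
```

Passing `args` instead of capturing values in a closure is what allows the framework to re-instantiate the generator per element.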
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abdullah Alrasheed, Achal Shah, Ad530, ADiegoCAlonso, Aditya Yogi, Ag Ramesh, akindyakov, Andy Kernahan, Anya Petrova, Aurelien Geron, Ben, Ben Barsdell, BhavaniSubramanian, braincodercn, Brett Koonce, Brian Nemsick, Brian Zier, Bryan Heden, candy.dc, cclauss, Clayne Robison, ctiijima, Dalmo Cirne, David Norman, David T.H. Kao, DosLin, ekelsen, Elson Rodriguez, Erik Smistad, Felix Abecassis, Fergal Cotter, fo40225, foo0x29a, Freedom" KoanSin Tan, FréDéRic BranchaudCharron, gdh1995, Geoffrey Irving, Giuseppe, gracehoney, Guido Zuidhof, Guillaume Klein, Guozhong Zhuang, Haggai, Harald Husum, imsheridan, Ivan Zhang, Jan Zikes, Jayaram Bobba, Jesse Benson, Jesse Gumz, Jiajia Li, Jie, jinghuangintel, Jingwen, jjsjann123, Joe Yearsley, Joel Hestness, Joel Shor, josephyearsley, Junpeng Lao, Karol M. Langner, Kb Sriram, krantideep95, Krish Ravindranath, Letian Feng, Loo Rong Jie, Lukas Geiger, Maciej, Mahmoud Abuzaina, ManHyuk, Mark Ryan, mbhuiyan, Michal Turek, Mostafa Alaa, Myungsung Kwak, Nand Dalal, Nehal J Wani, Neil Tenenholtz, ngc92, Nicholas Nadeau, P.Eng., Avs, Niranjan Hasabnis, PHidringer, Paul Van Eck, Peng Yu, Qing Zhao, Qingying Chen, Quanlong, Rajendra Arora, Rholais Lii, rmanyari, Robin Richtsfeld, Russell Klopfer, Sagi, Sam Sendelbach, Sandeep N Gupta, Sandip Giri, Sarah Edkins, Scott Tseng, Sdalbsoo, Sergii Khomenko, Seungwoo Choi (Biggie), Seyed Majid Azimi, Shaoning Zeng, shengfuintel, Siu Kei, Muk, Smit Shilu, soonson, Stefan Schweter, Sukhwan Kim, Sunitha Kambhampati, Taehoon Lee, tamimaddari82, Tang, Wenyi, Ted Chang, u2takey, Utkarsh Upadhyay, Vadim Markovtsev, voegtlel, Wai Hon Law, wangsiyu, Wenhao Hu, wenhao.hu, William D. Irons, Yan Facai (颜发才), Yanbo Liang, Yihong Wang, Yilei (Dolee) Yang, Yong Tang, Yuan (Terry) Tang
TensorFlow 1.9.0rc1
case540 released this
Release 1.9.0
Major Features And Improvements
* Updated `tf.keras` to the Keras 2.1.6 API.
* `tfe.Network` is deprecated. Please inherit from `tf.keras.Model`.
* Added support for core feature columns and losses to gradient boosted trees estimators.
* The `distributions.Bijector` API supports broadcasting for Bijectors with new API changes. See here for more details.
* Layered variable names have changed in the following conditions:
  * Using `tf.keras.layers` with custom variable scopes.
  * Using `tf.layers` in a subclassed `tf.keras.Model` class. See here for more details.
Breaking Changes
* If you're opening empty variable scopes, replace `variable_scope('', ...)` with `variable_scope(tf.get_variable_scope(), ...)`.
Bug Fixes and Other Changes
* `tf.data`:
  * `Dataset.from_generator()` now accepts an `args` list, in order to create nested generators.
  * `Dataset.list_files()` now produces deterministic results when `shuffle=False` or a `seed` is passed.
  * `tf.contrib.data.sample_from_datasets()` and `tf.contrib.data.choose_from_datasets()` make it easier to sample or deterministically choose elements from multiple datasets.
  * `tf.contrib.data.make_csv_dataset()` now supports line breaks in quoted strings, and two infrequently used arguments were removed.
  * (C++) `DatasetBase::DebugString()` is now `const`.
  * (C++) `DatasetBase::MakeIterator()` has been renamed to `DatasetBase::MakeIteratorInternal()`.
  * (C++) An `IteratorBase::Initialize()` method was added to support raising errors during iterator construction.
* Eager Execution:
* `tf.keras`:
  * Moved Keras code out of the `_impl` folder and removed API files.
  * `tf.keras.Model.save_weights` now saves in TensorFlow format by default.
  * Enabled dataset iterators to be passed to `tf.keras.Model` training/eval methods.
* TensorFlow Debugger (tfdbg) CLI: fixed an issue in which the TensorBoard Debugger Plugin could not handle total source file size exceeding the gRPC message size limit (4 MB).
* `tf.contrib`:
  * `tf.contrib.framework.zero_initializer` supports `ResourceVariable`.
  * Added "constrained_optimization" to tensorflow/contrib.
* Other:
  * Added GCS configuration ops.
  * Changed the signature of `MakeIterator` to enable propagating error status.
  * KL divergence for two Dirichlet distributions.
  * More consistent `GcsFileSystem` behavior for certain reads past EOF.
  * Updated the benchmark for `tf.scan` to match ranges across eager and graph modes.
  * Fixed a bug in the `tf.reduce_prod` gradient for complex dtypes.
  * Allow the use of '.' in variables (e.g. `hparams.parse('a.b=1.0')`), which would previously raise an error. This corresponds to an attribute name with an embedded '.' symbol (e.g. 'a.b'), which can only be accessed indirectly (e.g. through `getattr` and `setattr`). To set this up, the user must first explicitly add the variable to the hparam object (e.g. `hparams.add_hparam(name='a.b', value=0.0)`).
  * Benchmark for `tf.scan` in graph and eager modes.
  * Added `complex128` support to FFT, FFT2D, FFT3D, IFFT, IFFT2D, and IFFT3D.
  * Made ids unique in `nn.embedding_lookup_sparse`. This helps to reduce RPC calls for looking up the embeddings when there are repeated ids in the batch.
  * Support indicator column in boosted trees.
  * Prevent `tf.gradients()` from backpropagating through integer tensors.
  * `LinearOperator[1D,2D,3D]Circulant` added to `tensorflow.linalg`.
  * `Conv3D`, `Conv3DBackpropInput`, and `Conv3DBackpropFilter` now support arbitrary.
  * Added `tf.train.Checkpoint` for reading/writing object-based checkpoints.
  * Added `LinearOperatorKronecker`, a dense-free implementation of the Kronecker product.
  * Allow `LinearOperator` to broadcast.
  * `SavedModelBuilder` will now deduplicate asset names that point to files with the same basename and the same contents. Note that this may result in new asset files being included in SavedModels in cases where assets with the same name but different contents were previously overwriting each other.
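Why uniquing ids in `nn.embedding_lookup_sparse` reduces RPC calls: each distinct id only needs to be fetched once, and the results are then gathered back into the original order. A plain-Python sketch of that dedup-then-gather step (the `fetch` callable here is a hypothetical stand-in for the remote embedding lookup):

```python
def lookup_with_unique_ids(ids, fetch):
    """Fetch each distinct id once, then expand the results to the original order."""
    unique_ids = sorted(set(ids))
    table = {i: fetch(i) for i in unique_ids}  # one remote fetch per distinct id
    return [table[i] for i in ids]

calls = []
def fake_fetch(i):
    calls.append(i)        # record each simulated remote fetch
    return i * 10          # pretend this is the embedding row for id i

print(lookup_with_unique_ids([3, 1, 3, 3, 1], fake_fetch))
# [30, 10, 30, 30, 10]
print(len(calls))
# 2
```

With many repeated ids per batch (common for categorical features), this turns O(batch size) lookups into O(distinct ids) lookups.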
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abdullah Alrasheed, Achal Shah, Ad530, ADiegoCAlonso, Aditya Yogi, Ag Ramesh, akindyakov, Andy Kernahan, Anya Petrova, Aurelien Geron, Ben, Ben Barsdell, BhavaniSubramanian, braincodercn, Brett Koonce, Brian Nemsick, Brian Zier, Bryan Heden, candy.dc, cclauss, Clayne Robison, ctiijima, Dalmo Cirne, David Norman, David T.H. Kao, DosLin, ekelsen, Elson Rodriguez, Erik Smistad, Felix Abecassis, Fergal Cotter, fo40225, foo0x29a, Freedom" KoanSin Tan, FréDéRic BranchaudCharron, gdh1995, Geoffrey Irving, Giuseppe, gracehoney, Guido Zuidhof, Guillaume Klein, Guozhong Zhuang, Haggai, Harald Husum, imsheridan, Ivan Zhang, Jan Zikes, Jayaram Bobba, Jesse Benson, Jesse Gumz, Jiajia Li, Jie, jinghuangintel, Jingwen, jjsjann123, Joe Yearsley, Joel Hestness, Joel Shor, josephyearsley, Junpeng Lao, Karol M. Langner, Kb Sriram, krantideep95, Krish Ravindranath, Letian Feng, Loo Rong Jie, Lukas Geiger, Maciej, Mahmoud Abuzaina, ManHyuk, Mark Ryan, mbhuiyan, Michal Turek, Mostafa Alaa, Myungsung Kwak, Nand Dalal, Nehal J Wani, Neil Tenenholtz, ngc92, Nicholas Nadeau, P.Eng., Avs, Niranjan Hasabnis, PHidringer, Paul Van Eck, Peng Yu, Qing Zhao, Qingying Chen, Quanlong, Rajendra Arora, Rholais Lii, rmanyari, Robin Richtsfeld, Russell Klopfer, Sagi, Sam Sendelbach, Sandeep N Gupta, Sandip Giri, Sarah Edkins, Scott Tseng, Sdalbsoo, Sergii Khomenko, Seungwoo Choi (Biggie), Seyed Majid Azimi, Shaoning Zeng, shengfuintel, Siu Kei, Muk, Smit Shilu, soonson, Stefan Schweter, Sukhwan Kim, Sunitha Kambhampati, Taehoon Lee, tamimaddari82, Tang, Wenyi, Ted Chang, u2takey, Utkarsh Upadhyay, Vadim Markovtsev, voegtlel, Wai Hon Law, wangsiyu, Wenhao Hu, wenhao.hu, William D. Irons, Yan Facai (颜发才), Yanbo Liang, Yihong Wang, Yilei (Dolee) Yang, Yong Tang, Yuan (Terry) Tang
TensorFlow 1.9.0rc0
av8ramit released this
Release 1.9.0
Major Features And Improvements
- Update tf.keras to the Keras 2.1.6 API.
- `tfe.Network` is deprecated. Please inherit from `tf.keras.Model`.
- Added support for core feature columns and losses to gradient boosted trees estimators.
- The `distributions.Bijector` API supports broadcasting for Bijectors with new API changes. See here for more details.
- Layered variable names have changed in the following conditions:
  - Using `tf.keras.layers` with custom variable scopes.
  - Using `tf.layers` in a subclassed `tf.keras.Model` class. See here for more details.
Breaking Changes
- If you're opening empty variable scopes, replace `variable_scope('', ...)` with `variable_scope(tf.get_variable_scope(), ...)`.
Bug Fixes and Other Changes
- `tf.data`:
  - The `DatasetBase::DebugString()` method is now `const`.
  - Added the `tf.contrib.data.sample_from_datasets()` API for randomly sampling from multiple datasets.
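The sampling semantics can be sketched without TensorFlow: for each element, pick a source dataset at random in proportion to a weight vector, dropping exhausted sources. The function below is a pure-Python illustration; its `weights`, `n`, and `seed` parameters are conveniences for the sketch, not the exact TF signature:

```python
import random

def sample_from_datasets(datasets, weights, n, seed=None):
    """Draw n elements, choosing the source for each draw at random
    in proportion to its weight; exhausted sources are dropped."""
    rng = random.Random(seed)
    iterators = [iter(d) for d in datasets]
    weights = list(weights)  # local copy; shrinks alongside `iterators`
    out = []
    while len(out) < n and iterators:
        i = rng.choices(range(len(iterators)), weights=weights)[0]
        try:
            out.append(next(iterators[i]))
        except StopIteration:
            del iterators[i], weights[i]  # this source is exhausted
    return out

evens = (x for x in range(0, 100, 2))
odds = (x for x in range(1, 100, 2))
batch = sample_from_datasets([evens, odds], weights=[0.8, 0.2], n=10, seed=7)
```

With a fixed seed the mixture is reproducible, which mirrors why a `seed` argument matters for sampling ops.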
- Eager Execution:
- `tf.keras`:
  - Move Keras code out of the `_impl` folder and remove API files.
  - `tf.keras.Model.save_weights` now saves in TensorFlow format by default.
  - Enable dataset iterators to be passed to `tf.keras.Model` training/eval methods.
- Accelerated Linear Algebra (XLA):
- TensorFlow Debugger (tfdbg) CLI:
- `tf.contrib`:
  - Add `tf.contrib.data.choose_from_datasets()`.
  - `tf.contrib.data.make_csv_dataset()` now supports line breaks in quoted strings. Two arguments were removed from `make_csv_dataset`.
  - `tf.contrib.framework.zero_initializer` supports ResourceVariable.
  - Adding "constrained_optimization" to tensorflow/contrib.
- Other:
  - Add GCS Configuration Ops.
  - Changing the signature of `MakeIterator` to enable propagating error status.
  - KL divergence for two Dirichlet distributions.
  - More consistent GcsFileSystem behavior for certain reads past EOF.
  - Update benchmark for tf.scan to match ranges across eager and graph modes.
  - Fixed bug in the `tf.reduce_prod` gradient for complex dtypes.
  - Add optional `args` argument to `Dataset.from_generator()`.
  - Allow the use of '.' in variables (e.g. "hparams.parse('a.b=1.0')"), which would previously raise an error. This will correspond to an attribute name with an embedded '.' symbol (e.g. 'a.b'), which can only be accessed indirectly (e.g. through getattr and setattr). To set this up, the user will first need to explicitly add the variable to the hparam object (e.g. "hparams.add_hparam(name='a.b', value=0.0)").
  - Benchmark for tf.scan in graph and eager modes.
  - Added complex128 support to FFT, FFT2D, FFT3D, IFFT, IFFT2D, and IFFT3D.
  - Making ids unique in `nn.embedding_lookup_sparse`. This helps to reduce RPC calls for looking up the embeddings when there are repeated ids in the batch.
  - Support indicator column in boosted trees.
  - Prevent `tf.gradients()` from backpropagating through integer tensors.
  - LinearOperator[1D,2D,3D]Circulant added to `tensorflow.linalg`.
  - Conv3D, Conv3DBackpropInput, Conv3DBackpropFilter now supports arbitrary.
  - Added `tf.train.Checkpoint` for reading/writing object-based checkpoints.
  - `Dataset.list_files()` now produces deterministic results when `shuffle=False` or a `seed` is passed.
  - Added LinearOperatorKronecker, a dense-free implementation of the Kronecker Product.
  - Allow LinearOperator to broadcast.
- SavedModelBuilder will now deduplicate asset names that point to files with the same basename and the same contents. Note that this may result in new asset files included in SavedModels in cases where assets with the same name but different contents were previously overwriting each other.
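The dotted-name hparams behavior noted above can be demonstrated with a minimal stand-in for the hparams object; the `HParams` class below is a simplified illustration written for this note, not the actual `tf.contrib.training.HParams` implementation:

```python
class HParams:
    """Minimal stand-in: hyperparameters stored as instance attributes."""
    def add_hparam(self, name, value):
        # setattr accepts names containing '.', but such attributes are
        # only reachable indirectly, via getattr/setattr.
        setattr(self, name, value)

    def parse(self, spec):
        # Parse one "name=value" assignment; the name must have been added
        # beforehand, and the stored value's type drives the conversion.
        name, text = spec.split("=", 1)
        if not hasattr(self, name):
            raise ValueError("unknown hyperparameter: " + name)
        setattr(self, name, type(getattr(self, name))(text))

hp = HParams()
hp.add_hparam(name="a.b", value=0.0)  # must be added explicitly first
hp.parse("a.b=1.0")
value = getattr(hp, "a.b")  # 'hp.a.b' would look up attribute 'a' instead
```

Accessing `hp.a.b` directly fails because Python resolves it as attribute `a` then `b`; only `getattr`/`setattr` reach a name with an embedded dot.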
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abdullah Alrasheed, Achal Shah, Ad530, ADiegoCAlonso, Aditya Yogi, Ag Ramesh, akindyakov, Andy Kernahan, Anya Petrova, Aurelien Geron, Ben, Ben Barsdell, BhavaniSubramanian, braincodercn, Brett Koonce, Brian Nemsick, Brian Zier, Bryan Heden, candy.dc, cclauss, Clayne Robison, ctiijima, Dalmo Cirne, David Norman, David T.H. Kao, DosLin, ekelsen, Elson Rodriguez, Erik Smistad, Felix Abecassis, Fergal Cotter, fo40225, foo0x29a, Freedom" KoanSin Tan, FréDéRic BranchaudCharron, gdh1995, Geoffrey Irving, Giuseppe, gracehoney, Guido Zuidhof, Guillaume Klein, Guozhong Zhuang, Haggai, Harald Husum, imsheridan, Ivan Zhang, Jan Zikes, Jayaram Bobba, Jesse Benson, Jesse Gumz, Jiajia Li, Jie, jinghuangintel, Jingwen, jjsjann123, Joe Yearsley, Joel Hestness, Joel Shor, josephyearsley, Junpeng Lao, Karol M. Langner, Kb Sriram, krantideep95, Krish Ravindranath, Letian Feng, Loo Rong Jie, Lukas Geiger, Maciej, Mahmoud Abuzaina, ManHyuk, Mark Ryan, mbhuiyan, Michal Turek, Mostafa Alaa, Myungsung Kwak, Nand Dalal, Nehal J Wani, Neil Tenenholtz, ngc92, Nicholas Nadeau, P.Eng., Avs, Niranjan Hasabnis, PHidringer, Paul Van Eck, Peng Yu, Qing Zhao, Qingying Chen, Quanlong, Rajendra Arora, Rholais Lii, rmanyari, Robin Richtsfeld, Russell Klopfer, Sagi, Sam Sendelbach, Sandeep N Gupta, Sandip Giri, Sarah Edkins, Scott Tseng, Sdalbsoo, Sergii Khomenko, Seungwoo Choi (Biggie), Seyed Majid Azimi, Shaoning Zeng, shengfuintel, Siu Kei, Muk, Smit Shilu, soonson, Stefan Schweter, Sukhwan Kim, Sunitha Kambhampati, Taehoon Lee, tamimaddari82, Tang, Wenyi, Ted Chang, u2takey, Utkarsh Upadhyay, Vadim Markovtsev, voegtlel, Wai Hon Law, wangsiyu, Wenhao Hu, wenhao.hu, William D. Irons, Yan Facai (颜发才), Yanbo Liang, Yihong Wang, Yilei (Dolee) Yang, Yong Tang, Yuan (Terry) Tang
TensorFlow 1.7.1
annarev released this
Release 1.7.1
Bug Fixes and Other Changes
- Fixes the following potential security vulnerabilities:
  - The TensorFlow Lite TOCO compiler did not perform correct boundary checks when reading from some fields within TFLite files.
  - The block size in a checkpoint meta file could contain a large int64 value which causes an integer overflow upon addition. Subsequent code using n as an index may cause an out-of-bounds read.
  - TensorFlow checkpoint meta files use Google's snappy compression/decompression library (https://github.com/google/snappy). There is a memcpy-param-overlap issue in the version of snappy currently used by TensorFlow.
  - A maliciously crafted configuration file passed into the TensorFlow XLA compiler could cause an invalid memory access and/or a heap buffer overflow.
TensorFlow 1.8.0
yifeif released this
Release 1.8.0
Major Features And Improvements
- Can now pass `tf.contrib.distribute.MirroredStrategy()` to `tf.estimator.RunConfig()` to run an Estimator model on multiple GPUs on one machine.
- Add `tf.contrib.data.prefetch_to_device()`, which supports prefetching to GPU memory.
- Added Gradient Boosted Trees as pre-made Estimators: BoostedTreesClassifier, BoostedTreesRegressor.
- Add 3rd generation pipeline config for Cloud TPUs which improves performance and usability.
- `tf.contrib.bayesflow` is moving out to its own repo.
- Added `tf.contrib.{proto,rpc}` to allow generic proto parsing and RPC communication¹.
Bug Fixes and Other Changes
- `tf.data`:
  - Add `tf.contrib.data.prefetch_to_device`, which enables prefetching dataset elements to GPU memory.
  - Add `tf.contrib.data.AUTOTUNE`, which allows the tf.data runtime to automatically tune the prefetch buffer sizes based on your system and environment.
  - Add `tf.contrib.data.make_csv_dataset` for building datasets of CSV files.
 Add
- Eager Execution:
  - With eager execution, Datasets can now be used as standard Python iterators (`for batch in dataset:`). Both `Dataset.__iter__()` and `Dataset.make_one_shot_iterator()` can now be used to create iterators when eager execution is enabled.
  - Automatic device placement has been enabled (i.e., use a GPU if available automatically, without requiring an explicit `with tf.device("/gpu:0")`). (Fixes #14133)
  - `tf.GradientTape` has moved out of contrib.
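`tf.GradientTape` records operations as they execute eagerly and replays them in reverse to compute gradients. A toy pure-Python tape for scalar add/multiply shows the mechanism (an illustration of the concept, not the real implementation):

```python
class Var:
    """Scalar variable participating in taped operations."""
    def __init__(self, value):
        self.value, self.grad = value, 0.0

tape = []  # backprop callbacks, recorded in execution order

def mul(a, b):
    out = Var(a.value * b.value)
    def backprop():
        a.grad += b.value * out.grad  # d(a*b)/da = b
        b.grad += a.value * out.grad  # d(a*b)/db = a
    tape.append(backprop)
    return out

def add(a, b):
    out = Var(a.value + b.value)
    def backprop():
        a.grad += out.grad  # d(a+b)/da = 1
        b.grad += out.grad  # d(a+b)/db = 1
    tape.append(backprop)
    return out

x = Var(3.0)
y = add(mul(x, x), x)  # y = x**2 + x
y.grad = 1.0           # seed dy/dy
for backprop in reversed(tape):
    backprop()
# x.grad is now dy/dx = 2*x + 1 = 7.0 at x = 3
```

Replaying the tape backwards is exactly why gradients come out after the forward pass finishes, which fits naturally with eager execution.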
- `tf.keras`:
  - Added the fashion mnist dataset.
  - New data preprocessing functions: `image/random_brightness`, `sequence/TimeseriesGenerator`, and `text/hashing_trick`.
- Accelerated Linear Algebra (XLA):
  - Select and scatter in reference util and evaluator now use lexicographical order to break ties.
- TensorFlow Debugger (tfdbg) CLI:
  - During tensor-filter operations, allow exclusion of nodes by regular expressions.
  - Fix spurious background colors in some text terminals.
- `tf.contrib`:
  - Add meta-distribution BatchReshape which reshapes batch dimensions.
  - `tf.contrib.layers.recompute_grad` works for explicit gradient checkpointing on TPU.
  - Add `tf.contrib.framework.argsort`.
  - Allow `DNNBoostedTreeCombinedEstimator` to work with core versions of feature columns and losses.
  - Add non-linear image warping ops: `tf.contrib.image.sparse_image_warp`, `tf.contrib.image.dense_image_warp`, and `tf.contrib.image.interpolate_spline`.
  - Fix bug in `tf.contrib.opt.MultitaskOptimizerWrapper` where types of tensors were mismatched.
- Other:
  - Low-level graph construction now calls the TensorFlow C API. This change should be invisible to most users, but can be disabled by setting the environment variable `TF_C_API_GRAPH_CONSTRUCTION=0` in this release. Future releases will remove the ability to disable this change. Please file a bug if you find yourself using this escape hatch.
  - Add description of shapes and a pointer to tutorial notebook in `tf.distributions.Distribution`.
  - Update scatter operations:
    - Add `tf.scatter_min` and `tf.scatter_max`.
    - Extend scatter operations to work with a scalar update parameter.
  - Move cuDNN RNN ops to core for use in TensorFlow codebase only.
  - Add `float64` support for `Conv2d`, `Conv2dBackpropInput`, and `Conv2dBackpropFilter`.
  - Add `float64` support for `AvgPool`/`AvgPoolGrad`.
  - Make graph name scopes thread-local so that they work correctly in multi-threaded environments.
  - Update nsync synchronization library to avoid slow primitives on Linux.
  - Removed need to put nsync/public on C include path when building custom ops.
  - Add `tf.image.psnr`, `tf.image.ssim`, `tf.image.ssim_multiscale`, `tf.image.image_gradients`, `tf.image.sobel_edges`.
  - Add links to https://js.tensorflow.org.
  - Fix non-uniformity of orthogonal matrices.
  - Fix bug where multi-image Estimator eval summaries were not displayed correctly.
¹ The cancellation logic of the RPC op contains a concurrency error. A fix has been submitted to master and will be part of the next release.
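The scatter-update semantics, including the scalar update parameter noted above, can be sketched on a plain Python list; this mirrors the behavior of `tf.scatter_min`, not its implementation:

```python
def scatter_min(ref, indices, updates):
    """In-place scatter-min: ref[i] = min(ref[i], update) at each index.
    A scalar `updates` value is broadcast to every index."""
    if not isinstance(updates, (list, tuple)):
        updates = [updates] * len(indices)  # scalar update parameter
    for i, u in zip(indices, updates):
        ref[i] = min(ref[i], u)
    return ref

ref = [9, 2, 7, 4]
scatter_min(ref, [0, 2], [5, 8])  # elementwise -> [5, 2, 7, 4]
scatter_min(ref, [1, 3], 3)       # scalar broadcast -> [5, 2, 7, 3]
```

`tf.scatter_max` is the same pattern with `max` in place of `min`.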
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
4d55397500, Aghasy, Alan Du, Alan Lee, Alan Yee, Alex Wiltschko, Animesh Karnewar, Ankit Gupta, Anton Matosov, Aris L, Ben Barsdell, Brent Yi, Brett Koonce, Carl Thomé, cbockman, Chikanaga Tomoyuki, Chris Tava, CéDric Deltheil, Dahan Gong, Dalmo Cirne, Daniel Erenrich, David Norman, DavidNorman, Edd WilderJames, Fanjin Zeng, Felix Abecassis, fo40225, George Sterpu, Giovanni Terlingen, Gor Baghdasaryan, Guillaume Klein, Hanchen Li, Ilya Polenov, Jakub Kolodziejczyk, Jason Sadler, Jayaram Bobba, Jerry Liu, jinghuangintel, Jiongyan Zhang (张炯衍), Joel Shor, Jong Wook Kim, Julian Eisenschlos, Karl Lessard, Krish Ravindranath, Loo Rong Jie, Lukas Geiger, Luke Iwanski, Mahmoud Abuzaina, ManHyuk, Marvin Richter, Maximilian Mitchell, Mohammad Ashraf Bhuiyan, msofka, Mustafa Kasap, Nathan Burnham, Nathan Luehr, Naveen Marri, ngc92, nio1814, Oleg Zabluda, Ou Changkun, Panos Ipeirotis, Paul Van Eck, Peter Lee, Piotr Czapla, qjivy, Rholais Lii, Rodrigo Formigone, Russell Klopfer, ryantimjohn, Sang Han, SebastiáN RamíRez, shengfuintel, Siby Jose Plathottam, Silver Chan, Stanislaw Antol, Taehoon Lee, Tarang Chugh, Ted Chang, Thomas Bastiani, Xian Xu, Xiaoming (Jason) Cui, Yan Facai (颜发才), yaox12, Yashal Shakti Kanungo, Yong Tang, Yuan (Terry) Tang, Yuxin Wu, Ziyue(Louis) Lu
TensorFlow 1.8.0rc1
yifeif released this
Release 1.8.0
Release notes and contributor list are identical to the final TensorFlow 1.8.0 above.
TensorFlow 1.8.0rc0
annarev released this
Release 1.8.0
Release notes and contributor list are identical to the final TensorFlow 1.8.0 above.
TensorFlow 1.7.0
annarev released this
Release 1.7.0
Major Features And Improvements
- Eager mode is moving out of contrib; try `tf.enable_eager_execution()`.
- Graph rewrites emulating fixed-point quantization compatible with TensorFlow Lite, supported by the new `tf.contrib.quantize` package.
- Easily customize gradient computation with `tf.custom_gradient`.
- TensorBoard Debugger Plugin, the graphical user interface (GUI) of TensorFlow Debugger (tfdbg), is now in alpha.
- Experimental support for reading a SQLite database as a `Dataset` with the new `tf.contrib.data.SqlDataset`.
- Distributed Mutex / CriticalSection added to `tf.contrib.framework.CriticalSection`.
- Better text processing with `tf.regex_replace`.
- Easy, efficient sequence input with `tf.contrib.data.bucket_by_sequence_length`.
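Bucketing groups variable-length sequences of similar length so each batch needs little padding. A pure-Python sketch of the idea (illustrative only, not the TF implementation; leftover partially-filled buckets are simply dropped here):

```python
def bucket_by_sequence_length(sequences, bucket_boundaries, batch_size):
    """Group sequences into batches whose lengths fall in the same bucket.
    bucket_boundaries=[4, 8] makes buckets: len<4, 4<=len<8, len>=8."""
    buckets = {i: [] for i in range(len(bucket_boundaries) + 1)}
    batches = []
    for seq in sequences:
        idx = sum(len(seq) >= b for b in bucket_boundaries)
        buckets[idx].append(seq)
        if len(buckets[idx]) == batch_size:  # bucket full: emit a batch
            batches.append(buckets[idx])
            buckets[idx] = []
    return batches

seqs = [[1], [2, 2], [3] * 5, [4] * 6, [5, 5], [6]]
batches = bucket_by_sequence_length(seqs, bucket_boundaries=[3], batch_size=2)
# short sequences batch together; the two length>=3 sequences batch together
```

Because every batch comes from one bucket, padding each batch to its longest member wastes far less space than padding the whole dataset to the global maximum.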
Bug Fixes and Other Changes
- Accelerated Linear Algebra (XLA):
  - Add `MaxPoolGradGrad` support for XLA.
  - CSE pass from TensorFlow is now disabled in XLA.
- `tf.data`:
  - `tf.data.Dataset`:
    - Add support for building C++ Dataset op kernels as external libraries, using the `tf.load_op_library()` mechanism.
    - `Dataset.list_files()` now shuffles its output by default.
    - `Dataset.shuffle(..., seed=tf.constant(0, dtype=tf.int64))` now yields the same sequence of elements as `Dataset.shuffle(..., seed=0)`.
  - Add `num_parallel_reads` argument to `tf.data.TFRecordDataset`.
- `tf.contrib`:
  - `tf.contrib.bayesflow.halton_sequence` now supports randomization.
  - Add support for scalars in `tf.contrib.all_reduce`.
  - Add `effective_sample_size` to `tf.contrib.bayesflow.mcmc_diagnostics`.
  - Add `potential_scale_reduction` to `tf.contrib.bayesflow.mcmc_diagnostics`.
  - Add `BatchNormalization`, `Kumaraswamy` bijectors.
  - Deprecate `tf.contrib.learn`. Please check contrib/learn/README.md for instructions on how to convert existing code.
  - `tf.contrib.data`:
    - Remove deprecated `tf.contrib.data.Dataset`, `tf.contrib.data.Iterator`, `tf.contrib.data.FixedLengthRecordDataset`, `tf.contrib.data.TextLineDataset`, and `tf.contrib.data.TFRecordDataset` classes.
    - Added `bucket_by_sequence_length`, `sliding_window_batch`, and `make_batched_features_dataset`.
  - Remove unmaintained `tf.contrib.ndlstm`. You can find it externally at https://github.com/tmbarchive/tfndlstm.
  - Moved most of `tf.contrib.bayesflow` to its own repo: `tfp`.
- Other:
  - `tf.py_func` now reports the full stack trace if an exception occurs.
  - Integrate `TPUClusterResolver` with GKE's integration for Cloud TPUs.
  - Add a library for statistical testing of samplers.
  - Add helpers to stream data from the GCE VM to a Cloud TPU.
  - Integrate ClusterResolvers with TPUEstimator.
  - Unify metropolis_hastings interface with HMC kernel.
  - Move LIBXSMM convolutions to a separate define flag so that they are disabled by default.
  - Fix `MomentumOptimizer` lambda.
  - Reduce `tfp.layers` boilerplate via programmable docstrings.
  - Add `auc_with_confidence_intervals`, a method for computing the AUC and confidence interval with linearithmic time complexity.
  - `regression_head` now accepts a customized link function, so that users can define their own link function if `array_ops.identity` does not meet the requirement.
  - Fix `initialized_value` and `initial_value` behaviors for `ResourceVariables` created from `VariableDef` protos.
  - Add TensorSpec to represent the specification of Tensors.
  - Constant folding pass is now deterministic.
  - Support `float16` `dtype` in `tf.linalg.*`.
  - Add `tf.estimator.export.TensorServingInputReceiver` that allows `tf.estimator.Estimator.export_savedmodel` to pass raw tensors to model functions.
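The linearithmic bound on AUC comes from sorting: AUC equals the probability that a random positive scores above a random negative, which falls out of the rank sum of the positives after one O(n log n) sort. A pure-Python sketch (ties and the confidence-interval machinery are omitted; this is not the TF implementation):

```python
def auc_rank(labels, scores):
    """AUC via the Mann-Whitney U statistic: sort once, then compare the
    rank sum of positives against the count of positive/negative pairs."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Sum of 1-based ranks of the positive examples in score order.
    rank_sum = sum(rank + 1 for rank, i in enumerate(order) if labels[i])
    u = rank_sum - n_pos * (n_pos + 1) / 2  # positive-above-negative pairs
    return u / (n_pos * n_neg)

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
auc = auc_rank(labels, scores)  # 0.75: 3 of the 4 pos/neg pairs are ordered correctly
```

The sort dominates the cost, which is what makes an O(n log n) AUC (and interval) computation possible instead of comparing all O(n²) pairs.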
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
4d55397500, Abe, Alistair Low, Andy Kernahan, Appledore, Ben, Ben Barsdell, Boris Pfahringer, Brad Wannow, Brett Koonce, Carl Thomé, cclauss, Chengzhi Chen, Chris Drake, Christopher Yeh, Clayne Robison, Codrut Grosu, Daniel Trebbien, Danny Goodman, David Goodwin, David Norman, Deron Eriksson, Donggeon Lim, Donny Viszneki, DosLin, DylanDmitri, Francisco Guerrero, Fred Reiss, gdh1995, Giuseppe, Glenn Weidner, gracehoney, Guozhong Zhuang, Haichen "Hc" Li, Harald Husum, harumitsu.nobuta, Henry Spivey, hsm207, Jekyll Song, Jerome, Jiongyan Zhang, jjsjann123, John Sungjin Park, Johnson145, JoshVarty, Julian Wolff, Jun Wang, JuneOne, Kamil Sindi, Kb Sriram, KdavisMozilla, Kenji, lazypanda1, LiangChi Hsieh, Loo Rong Jie, Mahesh Bhosale, MandarJKulkarni, ManHyuk, Marcus Ong, Marshal Hayes, Martin Pool, matthieudelaro, mdfaijul, mholzel, Michael Zhou, Ming Li, Minmin Sun, Myungjoo Ham, MyungsungKwak, Naman Kamra, Peng Yu, Penghao Cen, Phil, RaghuramanK, resec, Rohin Mohanadas, Sandeep N Gupta, Scott Tseng, seaotterman, Seo Sanghyeon, Sergei Lebedev, Ted Chang, terrytangyuan, Tim H, tkunic, Tod, vihanjain, Yan Facai (颜发才), Yin Li, Yong Tang, Yukun Chen, Yusuke Yamada
TensorFlow 1.7.0rc1
annarev released this
Release 1.7.0
Release notes and contributor list match those of the final TensorFlow 1.7.0 above.
TensorFlow 1.5.1
av8ramit released this
Release 1.5.1
Bug Fixes and Other Changes
* Fixes a potential security vulnerability where on-the-fly changes to the dtype of a tensor reference may lead to undefined behavior.
TensorFlow 1.7.0rc0
yifeif released this
Release 1.7.0
Major Features And Improvements
* Eager mode is moving out of contrib; try `tf.enable_eager_execution()`.
* Graph rewrites emulating fixed-point quantization compatible with TensorFlow Lite, supported by the new `tf.contrib.quantize` package.
* Easily customize gradient computation with `tf.custom_gradient`.
* TensorBoard Debugger Plugin, the graphical user interface (GUI) of TensorFlow Debugger (tfdbg), is now in alpha.
* Experimental support for reading a sqlite database as a `Dataset` with the new `tf.contrib.data.SqlDataset`.
* Distributed Mutex / CriticalSection added to `tf.contrib.framework.CriticalSection`.
* Better text processing with `tf.regex_replace`.
* Easy, efficient sequence input with `tf.contrib.data.bucket_by_sequence_length`.
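The idea behind `tf.contrib.data.bucket_by_sequence_length` is to group variable-length sequences into batches of similar lengths so each batch needs little padding. A plain-Python sketch of that bucketing logic (an illustration of the technique, not TensorFlow's implementation; the function name and signature here are invented):

```python
from collections import defaultdict

def bucket_by_sequence_length(sequences, boundaries, batch_size):
    """Yield batches of sequences whose lengths fall between the same
    pair of consecutive boundaries, so padding per batch stays small."""
    buckets = defaultdict(list)
    for seq in sequences:
        # Bucket id = number of boundaries the sequence length exceeds.
        bucket_id = sum(len(seq) > b for b in boundaries)
        buckets[bucket_id].append(seq)
        if len(buckets[bucket_id]) == batch_size:
            yield buckets.pop(bucket_id)
    # Flush remaining, partially filled buckets.
    for batch in buckets.values():
        yield batch
```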
Bug Fixes and Other Changes
* Accelerated Linear Algebra (XLA):
  * Add `MaxPoolGradGrad` support for XLA.
  * CSE pass from TensorFlow is now disabled in XLA.
* `tf.data`:
  * Add support for building C++ Dataset op kernels as external libraries, using the `tf.load_op_library()` mechanism.
  * `Dataset.list_files()` now shuffles its output by default.
  * `Dataset.shuffle(..., seed=tf.constant(0, dtype=tf.int64))` now yields the same sequence of elements as `Dataset.shuffle(..., seed=0)`.
  * Add `num_parallel_reads` argument to `tf.data.TFRecordDataset`.
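The `Dataset.shuffle` seed fix above is about determinism: the same seed, whether given as a Python int or an int64 tensor, must now produce the same element order. The underlying contract of a seeded shuffle can be shown in plain Python (illustrative; this is the standard-library `random` module, not tf.data's shuffler):

```python
import random

def shuffled(elements, seed):
    """Return a deterministically shuffled copy: same seed, same order."""
    rng = random.Random(seed)  # the seed fully determines the permutation
    out = list(elements)
    rng.shuffle(out)
    return out
```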
* `tf.contrib`:
  * `tf.contrib.bayesflow.halton_sequence` now supports randomization.
  * Add support for scalars in `tf.contrib.all_reduce`.
  * Add `effective_sample_size` to `tf.contrib.bayesflow.mcmc_diagnostics`.
  * Add `potential_scale_reduction` to `tf.contrib.bayesflow.mcmc_diagnostics`.
  * Add `BatchNormalization`, `Kumaraswamy` bijectors.
  * Deprecate `tf.contrib.learn`. Please check contrib/learn/README.md for instructions on how to convert existing code.
  * `tf.contrib.data`:
    * Remove deprecated `tf.contrib.data.Dataset`, `tf.contrib.data.Iterator`, `tf.contrib.data.FixedLengthRecordDataset`, `tf.contrib.data.TextLineDataset`, and `tf.contrib.data.TFRecordDataset` classes.
    * Added `bucket_by_sequence_length`, `sliding_window_batch`, and `make_batched_features_dataset`.
  * Remove unmaintained `tf.contrib.ndlstm`. You can find it externally at https://github.com/tmbarchive/tfndlstm.
  * Moved most of `tf.contrib.bayesflow` to its own repo: `tfp`.
* Other:
  * `tf.py_func` now reports the full stack trace if an exception occurs.
  * Integrate `TPUClusterResolver` with GKE's integration for Cloud TPUs.
  * Add a library for statistical testing of samplers.
  * Add helpers to stream data from the GCE VM to a Cloud TPU.
  * Integrate ClusterResolvers with TPUEstimator.
  * Unify metropolis_hastings interface with HMC kernel.
  * Move LIBXSMM convolutions to a separate define flag so that they are disabled by default.
  * Fix `MomentumOptimizer` lambda.
  * Reduce `tfp.layers` boilerplate via programmable docstrings.
  * Add `auc_with_confidence_intervals`, a method for computing the AUC and confidence interval with linearithmic time complexity.
  * `regression_head` now accepts a customized link function, so that users can define their own link function when `array_ops.identity` does not meet their needs.
  * Fix `initialized_value` and `initial_value` behaviors for `ResourceVariables` created from `VariableDef` protos.
  * Add `TensorSpec` to represent the specification of Tensors.
  * Constant folding pass is now deterministic.
  * Support `float16` `dtype` in `tf.linalg.*`.
  * Add `tf.estimator.export.TensorServingInputReceiver` that allows `tf.estimator.Estimator.export_savedmodel` to pass raw tensors to model functions.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
4d55397500, Abe, Alistair Low, Andy Kernahan, Appledore, Ben, Ben Barsdell, Boris Pfahringer, Brad Wannow, Brett Koonce, Carl Thomé, cclauss, Chengzhi Chen, Chris Drake, Christopher Yeh, Clayne Robison, Codrut Grosu, Daniel Trebbien, Danny Goodman, David Goodwin, David Norman, Deron Eriksson, Donggeon Lim, Donny Viszneki, DosLin, DylanDmitri, Francisco Guerrero, Fred Reiss, gdh1995, Giuseppe, Glenn Weidner, gracehoney, Guozhong Zhuang, Haichen "Hc" Li, Harald Husum, harumitsu.nobuta, Henry Spivey, hsm207, Jekyll Song, Jerome, Jiongyan Zhang, jjsjann123, John Sungjin Park, Johnson145, JoshVarty, Julian Wolff, Jun Wang, JuneOne, Kamil Sindi, Kb Sriram, KdavisMozilla, Kenji, lazypanda1, LiangChi Hsieh, Loo Rong Jie, Mahesh Bhosale, MandarJKulkarni, ManHyuk, Marcus Ong, Marshal Hayes, Martin Pool, matthieudelaro, mdfaijul, mholzel, Michael Zhou, Ming Li, Minmin Sun, Myungjoo Ham, MyungsungKwak, Naman Kamra, Peng Yu, Penghao Cen, Phil, RaghuramanK, resec, Rohin Mohanadas, Sandeep N Gupta, Scott Tseng, seaotterman, Seo Sanghyeon, Sergei Lebedev, Ted Chang, terrytangyuan, Tim H, tkunic, Tod, vihanjain, Yan Facai (颜发才), Yin Li, Yong Tang, Yukun Chen, Yusuke Yamada
TensorFlow 1.6.0
gunan released this
Release 1.6.0
Breaking Changes
* Pre-built binaries are now built against CUDA 9.0 and cuDNN 7.
* Pre-built binaries will use AVX instructions. This may break TF on older CPUs.
Major Features And Improvements
* New Optimizer internal API for non-slot variables. Descendants of AdamOptimizer that access _beta[12]_power will need to be updated.
* `tf.estimator.{FinalExporter,LatestExporter}` now export stripped SavedModels. This improves forward compatibility of the SavedModel.
* FFT support added to XLA CPU/GPU.
* Android TF can now be built with CUDA acceleration on compatible Tegra devices (see contrib/makefile/README.md for more information).
Bug Fixes and Other Changes
* Documentation updates:
  * Added a second version of Getting Started, which is aimed at ML newcomers.
  * Clarified documentation on the `resize_images.align_corners` parameter.
  * Additional documentation for TPUs.
* Google Cloud Storage (GCS):
  * Add client-side throttle.
  * Add a `FlushCaches()` method to the FileSystem interface, with an implementation for GcsFileSystem.
* Other:
  * Add `tf.contrib.distributions.Kumaraswamy`.
  * `RetryingFileSystem::FlushCaches()` calls the base FileSystem's `FlushCaches()`.
  * Add `auto_correlation` to distributions.
  * Add `tf.contrib.distributions.Autoregressive`.
  * Add `SeparableConv1D` layer.
  * Add convolutional Flipout layers.
  * When both inputs of `tf.matmul` are bfloat16, it returns bfloat16, instead of float32.
  * Added `tf.contrib.image.connected_components`.
  * Add `tf.contrib.framework.CriticalSection` that allows atomic variable access.
  * Output variance over trees predictions for classifications tasks.
  * For `pt` and `eval` commands, allow writing tensor values to filesystem as numpy files.
  * gRPC: Propagate truncated errors (instead of returning gRPC internal error).
  * Augment `parallel_interleave` to support 2 kinds of prefetching.
  * Improved XLA support for C64-related ops: log, pow, atan2, tanh.
  * Add probabilistic convolutional layers.
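`tf.contrib.image.connected_components`, listed above, labels connected regions of a binary image. The core algorithm can be sketched in plain Python as a flood fill (an illustration of connected-component labeling, not TensorFlow's op; this version uses 4-connectivity):

```python
def connected_components(image):
    """Label 4-connected components of a binary 2-D image.

    Returns a grid of labels where 0 marks background and components
    are numbered from 1 in scan order.
    """
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                current += 1
                stack = [(r, c)]
                while stack:  # iterative depth-first flood fill
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and image[y][x] and not labels[y][x]:
                        labels[y][x] = current
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return labels
```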
API Changes
* Introducing `prepare_variance` boolean with default setting to False for backward compatibility.
* Move `layers_dense_variational_impl.py` to `layers_dense_variational.py`.
Known Bugs

* Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or `CUDA_ILLEGAL_ADDRESS` failures.

  Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA 9 and CUDA 9.1 sometimes does not properly compute the carry bit when decomposing 64-bit address calculations with large offsets (e.g. `load [x + large_constant]`) into 32-bit arithmetic in SASS.

  As a result, these versions of `ptxas` miscompile most XLA programs which use more than 4GB of temp memory. This results in garbage results and/or `CUDA_ERROR_ILLEGAL_ADDRESS` failures.

  A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a fix for CUDA 9.0.x. Until the fix is available, the only workaround is to downgrade to CUDA 8.0.x or disable XLA:GPU.

  TensorFlow will print a warning if you use XLA:GPU with a known-bad version of CUDA; see e00ba24.

* The `tensorboard` command or module may appear to be missing after certain upgrade flows. This is due to pip package conflicts as a result of changing the TensorBoard package name. See the TensorBoard 1.6.0 release notes for a fix.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
4d55397500, Ag Ramesh, Aiden Scandella, Akimasa Kimura, Alex Rothberg, Allen Goodman,
amilioto, Andrei Costinescu, Andrei Nigmatulin, Anjum Sayed, Anthony Platanios,
Anush Elangovan, Armando Fandango, Ashish Kumar Ram, Ashwini Shukla, Ben, Bhavani Subramanian,
Brett Koonce, Carl Thomé, cclauss, Cesc, Changming Sun, Christoph Boeddeker, Clayne Robison,
Clemens Schulz, Clint (Woonhyuk Baek), codrut3, Cole Gerdemann, Colin Raffel, Daniel Trebbien,
Daniel Ylitalo, Daniel Zhang, Daniyar, Darjan Salaj, Dave Maclachlan, David Norman, DongJian,
dongsamb, dssgsra, Edward H, eladweiss, elilienstein, Eric Lilienstein, error.d, Eunji Jeong, fanlu,
Florian Courtial, fo40225, Fred, Gregg Helt, Guozhong Zhuang, Hanchen Li, hsm207, hyunyoung2,
ImSheridan, Ishant Mrinal Haloi, Jacky Ko, Jay Young, Jean Flaherty, Jerome, JerrikEph, Jesse
Kinkead, jfaath, Jian Lin, jinghuangintel, Jiongyan Zhang, Joel Hestness, Joel Shor, Johnny Chan,
Julian Niedermeier, Julian Wolff, JxKing, KWW, Karl Lessard, Kasper Marstal, Keiji Ariyama,
KoanSin Tan, Loki Der Quaeler, Loo Rong Jie, Luke Schaefer, Lynn Jackson, ManHyuk, Matt Basta,
Matt Smith, Matthew Schulkind, Michael, michaelkhan3, Miguel Piedrafita, Mikalai Drabovich,
Mike Knapp, mjwen, mktozk, Mohamed Aly, Mohammad Ashraf Bhuiyan, Myungjoo Ham, Naman Bhalla,
NamrataIbm, Nathan Luehr, nathansilberman, Netzeband, Niranjan Hasabnis, Omar Aflak, Ozge
Yalcinkaya, Parth P Panchal, patrickzzy, Patryk Chrabaszcz, Paul Van Eck, Paweł Kapica, Peng Yu,
Philip Yang, Pierre Blondeau, PoHsien Chu, powderluv, Puyu Wang, Rajendra Arora, Rasmus, Renat
Idrisov, resec, Robin Richtsfeld, Ronald Eddy Jr, Sahil Singh, Sam Matzek, Sami Kama, sandipmgiri,
Santiago Castro, Sayed Hadi Hashemi, Scott Tseng, Sergii Khomenko, Shahid, Shengpeng Liu, Shreyash
Sharma, Shrinidhi Kl, Simone Cirillo, simsicon, Stanislav Levental, starsblinking, Stephen Lumenta,
Steven Hickson, Su Tang, Taehoon Lee, Takuya Wakisaka, Ted Chang, Ted Ying, Tijmen Verhulsdonck,
Timofey Kondrashov, vade, vaibhav, Valentin Khrulkov, vchigrin, Victor Costan, Viraj Navkal,
Vivek Rane, wagonhelm, Yan Facai (颜发才), Yanbo Liang, Yaroslav Bulatov, yegord, Yong Tang,
Yoni Tsafir, yordun, Yuan (Terry) Tang, Yuxin Wu, zhengdi, Zhengsheng Wei, 田传武
TensorFlow 1.6.0rc1
gunan released this
Release 1.6.0
The release notes for this candidate are identical to those of the final 1.6.0 release above.
TensorFlow 1.6.0rc0
av8ramit released this
Release 1.6.0
Breaking Changes
* Pre-built binaries are now built against CUDA 9.0 and cuDNN 7.
* Pre-built binaries will use AVX instructions. This may break TF on older CPUs.
Major Features And Improvements
* `tf.estimator.{FinalExporter,LatestExporter}` now export stripped SavedModels. This improves forward compatibility of the SavedModel.
* FFT support added to XLA CPU/GPU.
Bug Fixes and Other Changes
* Documentation updates:
  * Added a second version of Getting Started, which is aimed at ML newcomers.
  * Clarified documentation on the `resize_images.align_corners` parameter.
  * Additional documentation for TPUs.
* Google Cloud Storage (GCS):
  * Add client-side throttle.
  * Add a `FlushCaches()` method to the FileSystem interface, with an implementation for GcsFileSystem.
* Other:
  * New Optimizer internal API for non-slot variables. Descendants of AdamOptimizer that access _beta[12]_power will need to be updated.
  * Add `tf.contrib.distributions.Kumaraswamy`.
  * `RetryingFileSystem::FlushCaches()` calls the base FileSystem's `FlushCaches()`.
  * Add `auto_correlation` to distributions.
  * Add `tf.contrib.distributions.Autoregressive`.
  * Add `SeparableConv1D` layer.
  * Add convolutional Flipout layers.
  * When both inputs of `tf.matmul` are bfloat16, it returns bfloat16, instead of float32.
  * Added `tf.contrib.image.connected_components`.
  * Add `tf.contrib.framework.CriticalSection` that allows atomic variable access.
  * Output variance over trees predictions for classifications tasks.
  * For `pt` and `eval` commands, allow writing tensor values to filesystem as numpy files.
  * gRPC: Propagate truncated errors (instead of returning gRPC internal error).
  * Augment `parallel_interleave` to support 2 kinds of prefetching.
  * Improved XLA support for C64-related ops: log, pow, atan2, tanh.
  * Add probabilistic convolutional layers.
API Changes
* Introducing `prepare_variance` boolean with default setting to False for backward compatibility.
* Move `layers_dense_variational_impl.py` to `layers_dense_variational.py`.
Known Bugs

* Using XLA:GPU with CUDA 9 and CUDA 9.1 results in garbage results and/or `CUDA_ILLEGAL_ADDRESS` failures.

  Google discovered in mid-December 2017 that the PTX-to-SASS compiler in CUDA 9 and CUDA 9.1 sometimes does not properly compute the carry bit when decomposing 64-bit address calculations with large offsets (e.g. `load [x + large_constant]`) into 32-bit arithmetic in SASS.

  As a result, these versions of `ptxas` miscompile most XLA programs which use more than 4GB of temp memory. This results in garbage results and/or `CUDA_ERROR_ILLEGAL_ADDRESS` failures.

  A fix in CUDA 9.1.121 is expected in late February 2018. We do not expect a fix for CUDA 9.0.x. Until the fix is available, the only workaround is to downgrade to CUDA 8.0.x or disable XLA:GPU.

  TensorFlow will print a warning if you use XLA:GPU with a known-bad version of CUDA; see e00ba24.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
4d55397500, Ag Ramesh, Aiden Scandella, Akimasa Kimura, Alex Rothberg, Allen Goodman,
amilioto, Andrei Costinescu, Andrei Nigmatulin, Anjum Sayed, Anthony Platanios,
Anush Elangovan, Armando Fandango, Ashish Kumar Ram, Ashwini Shukla, Ben, Bhavani Subramanian,
Brett Koonce, Carl Thomé, cclauss, Cesc, Changming Sun, Christoph Boeddeker, Clayne Robison,
Clemens Schulz, Clint (Woonhyuk Baek), codrut3, Cole Gerdemann, Colin Raffel, Daniel Trebbien,
Daniel Ylitalo, Daniel Zhang, Daniyar, Darjan Salaj, Dave Maclachlan, David Norman, DongJian,
dongsamb, dssgsra, Edward H, eladweiss, elilienstein, Eric Lilienstein, error.d, Eunji Jeong, fanlu,
Florian Courtial, fo40225, Fred, Gregg Helt, Guozhong Zhuang, Hanchen Li, hsm207, hyunyoung2,
ImSheridan, Ishant Mrinal Haloi, Jacky Ko, Jay Young, Jean Flaherty, Jerome, JerrikEph, Jesse
Kinkead, jfaath, Jian Lin, jinghuangintel, Jiongyan Zhang, Joel Hestness, Joel Shor, Johnny Chan,
Julian Niedermeier, Julian Wolff, JxKing, KWW, Karl Lessard, Kasper Marstal, Keiji Ariyama,
KoanSin Tan, Loki Der Quaeler, Loo Rong Jie, Luke Schaefer, Lynn Jackson, ManHyuk, Matt Basta,
Matt Smith, Matthew Schulkind, Michael, michaelkhan3, Miguel Piedrafita, Mikalai Drabovich,
Mike Knapp, mjwen, mktozk, Mohamed Aly, Mohammad Ashraf Bhuiyan, Myungjoo Ham, Naman Bhalla,
NamrataIbm, Nathan Luehr, nathansilberman, Netzeband, Niranjan Hasabnis, Omar Aflak, Ozge
Yalcinkaya, Parth P Panchal, patrickzzy, Patryk Chrabaszcz, Paul Van Eck, Paweł Kapica, Peng Yu,
Philip Yang, Pierre Blondeau, PoHsien Chu, powderluv, Puyu Wang, Rajendra Arora, Rasmus, Renat
Idrisov, resec, Robin Richtsfeld, Ronald Eddy Jr, Sahil Singh, Sam Matzek, Sami Kama, sandipmgiri,
Santiago Castro, Sayed Hadi Hashemi, Scott Tseng, Sergii Khomenko, Shahid, Shengpeng Liu, Shreyash
Sharma, Shrinidhi Kl, Simone Cirillo, simsicon, Stanislav Levental, starsblinking, Stephen Lumenta,
Steven Hickson, Su Tang, Taehoon Lee, Takuya Wakisaka, Ted Chang, Ted Ying, Tijmen Verhulsdonck,
Timofey Kondrashov, vade, vaibhav, Valentin Khrulkov, vchigrin, Victor Costan, Viraj Navkal,
Vivek Rane, wagonhelm, Yan Facai (颜发才), Yanbo Liang, Yaroslav Bulatov, yegord, Yong Tang,
Yoni Tsafir, yordun, Yuan (Terry) Tang, Yuxin Wu, zhengdi, Zhengsheng Wei, 田传武
TensorFlow 1.5.0rc1
angersson released this
Release 1.5.0
Breaking Changes
* Pre-built binaries are now built against CUDA 9 and cuDNN 7.
* Our Linux binaries are built using Ubuntu 16 containers, potentially introducing glibc incompatibility issues with Ubuntu 14.
* Starting from the 1.6 release, our pre-built binaries will use AVX instructions. This may break TF on older CPUs.
Major Features And Improvements
* Eager execution preview version is now available.
* TensorFlow Lite dev preview is now available.
* CUDA 9 and cuDNN 7 support.
* Accelerated Linear Algebra (XLA):
  * Add `complex64` support to XLA compiler.
  * `bfloat` support is now added to XLA infrastructure.
  * Make `ClusterSpec` propagation work with XLA devices.
  * Use a deterministic executor to generate XLA graph.
* `tf.contrib`:
  * `tf.contrib.distributions`:
    * Add `tf.contrib.distributions.Autoregressive`.
    * Make `tf.contrib.distributions` QuadratureCompound classes support batch.
    * Infer `tf.contrib.distributions.RelaxedOneHotCategorical` `dtype` from arguments.
    * Make `tf.contrib.distributions` quadrature family parameterized by `quadrature_grid_and_prob` vs `quadrature_degree`.
    * `auto_correlation` added to `tf.contrib.distributions`.
  * Add `tf.contrib.bayesflow.layers`, a collection of probabilistic (neural) layers.
  * Add `tf.contrib.bayesflow.halton_sequence`.
  * Add `tf.contrib.data.make_saveable_from_iterator`.
  * Add `tf.contrib.data.shuffle_and_repeat`.
  * Add new custom transformation: `tf.contrib.data.scan()`.
  * `tf.contrib.distributions.bijectors`:
    * Add `tf.contrib.distributions.bijectors.MaskedAutoregressiveFlow`.
    * Add `tf.contrib.distributions.bijectors.Permute`.
    * Add `tf.contrib.distributions.bijectors.Gumbel`.
    * Add `tf.contrib.distributions.bijectors.Reshape`.
    * Support shape inference (i.e., shapes containing -1) in the Reshape bijector.
* Add `streaming_precision_recall_at_equal_thresholds`, a method for computing streaming precision and recall with `O(num_thresholds + size of predictions)` time and space complexity.
* Change `RunConfig` default behavior to not set a random seed, making random behavior independently random on distributed workers. We expect this to generally improve training performance. Models that do rely on determinism should set a random seed explicitly.
* Replaced the implementation of `tf.flags` with `absl.flags`.
* Add support for `CUBLAS_TENSOR_OP_MATH` in fp16 GEMM.
* Add support for CUDA on NVIDIA Tegra devices.
Bug Fixes and Other Changes
* Documentation updates:
  * Clarified that you can only install TensorFlow on 64-bit machines.
  * Added a short doc explaining how `Estimator`s save checkpoints.
  * Add documentation for ops supported by the `tf2xla` bridge.
  * Fix minor typos in the doc of `SpaceToDepth` and `DepthToSpace`.
  * Updated documentation comments in `mfcc_mel_filterbank.h` and `mfcc.h` to clarify that the input domain is squared magnitude spectra and the weighting is done on linear magnitude spectra (sqrt of inputs).
  * Change `tf.contrib.distributions` docstring examples to use `tfd` alias rather than `ds`, `bs`.
  * Fix docstring typos in `tf.distributions.bijectors.Bijector`.
  * `tf.assert_equal` no longer raises `ValueError`. It now raises `InvalidArgumentError`, as documented.
  * Update Getting Started docs and API intro.
* Google Cloud Storage (GCS):
  * Add userspace DNS caching for the GCS client.
  * Customize request timeouts for the GCS filesystem.
  * Improve GCS filesystem caching.
* Bug Fixes:
  * Fix bug where partitioned integer variables got their wrong shapes. Before this change, all partitions of an integer variable were initialized with the shape of the unpartitioned variable; after this change they are initialized correctly.
  * Fix correctness bug in CPU and GPU implementations of Adadelta.
  * Fix a bug in `import_meta_graph`'s handling of partitioned variables when importing into a scope. WARNING: This may break loading checkpoints of graphs with partitioned variables saved after using `import_meta_graph` with a non-empty `import_scope` argument.
  * Fix bug in offline debugger which prevented viewing events.
  * Added the `WorkerService.DeleteWorkerSession` method to the gRPC interface, to fix a memory leak. Ensure that your master and worker servers are running the same version of TensorFlow to avoid compatibility issues.
  * Fix bug in peephole implementation of BlockLSTM cell.
  * Fix bug by casting dtype of `log_det_jacobian` to match `log_prob` in `TransformedDistribution`.
  * Ensure `tf.distributions.Multinomial` doesn't underflow in `log_prob`.
* Other:
  * Add necessary shape util support for bfloat16.
  * Add a way to run ops using a step function to MonitoredSession.
  * Add `DenseFlipout` probabilistic layer.
  * A new flag `ignore_live_threads` is available on train. If set to `True`, it will ignore threads that remain running when tearing down infrastructure after successfully completing training, instead of throwing a RuntimeError.
  * Re-standardize `DenseVariational` as simpler template for other probabilistic layers.
  * `tf.data` now supports `tf.SparseTensor` components in dataset elements.
  * It is now possible to iterate over `Tensor`s.
  * Allow `SparseSegmentReduction` ops to have missing segment IDs.
  * Modify custom export strategy to account for multidimensional sparse float splits.
  * `Conv2D`, `Conv2DBackpropInput`, `Conv2DBackpropFilter` now support arbitrary dilations with GPU and cuDNNv6 support.
  * `Estimator` now supports `Dataset`: `input_fn` can return a `Dataset` instead of `Tensor`s.
  * Add `RevBlock`, a memory-efficient implementation of reversible residual layers.
  * Reduce BFCAllocator internal fragmentation.
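`RevBlock` saves memory because a reversible residual block's inputs can be reconstructed exactly from its outputs, so activations need not be stored for backprop. The forward/inverse pair at the heart of the technique, sketched in plain Python (an illustration of the reversible-residual idea, not TensorFlow's `RevBlock` implementation):

```python
def rev_block_forward(x1, x2, f, g):
    """Reversible residual block forward pass over two input halves."""
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def rev_block_inverse(y1, y2, f, g):
    """Recover the block's inputs exactly from its outputs by
    applying the residual functions in reverse order."""
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2
```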
  * Add `cross_entropy` and `kl_divergence` to `tf.distributions.Distribution`.
  * Add `tf.nn.softmax_cross_entropy_with_logits_v2` which enables backprop w.r.t. the labels.
  * GPU backend now uses `ptxas` to compile generated PTX.
  * `BufferAssignment`'s protocol buffer dump is now deterministic.
  * Change embedding op to use parallel version of `DynamicStitch`.
  * Add support for sparse multidimensional feature columns.
  * Speed up the case for sparse float columns that have only 1 value.
  * Allow sparse float splits to support multivalent feature columns.
  * Add `quantile` to `tf.distributions.TransformedDistribution`.
  * Add `NCHW_VECT_C` support for `tf.depth_to_space` on GPU.
  * Add `NCHW_VECT_C` support for `tf.space_to_depth` on GPU.
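The point of `tf.nn.softmax_cross_entropy_with_logits_v2` is that the loss is differentiable with respect to the labels as well as the logits, which matters when labels are themselves learned (e.g. label smoothing or distillation). The quantity it computes can be sketched in plain Python (a numerically stable sketch of the math, not the TF kernel):

```python
import math

def softmax_cross_entropy(logits, labels):
    """Cross entropy between a soft label distribution and
    softmax(logits), computed via a stable log-softmax."""
    m = max(logits)
    total = sum(math.exp(z - m) for z in logits)  # shift by max for stability
    log_softmax = [z - m - math.log(total) for z in logits]
    return -sum(y * ls for y, ls in zip(labels, log_softmax))
```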
API Changes
* Rename `SqueezeDims` attribute to `Axis` in C++ API for Squeeze op.
* `Stream::BlockHostUntilDone` now returns Status rather than bool.
* Minor refactor: move stats files from `stochastic` to `common` and remove `stochastic`.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Adam Zahran, Ag Ramesh, Alan Lee, Alan Yee, Alex Sergeev, Alexander, Amir H. Jadidinejad,
Amy, Anastasios Doumoulakis, Andrei Costinescu, Andrei Nigmatulin, Anthony Platanios,
Anush Elangovan, arixlin, Armen Donigian, ArtëM Sobolev, Atlas7, Ben Barsdell, Bill Prin,
Bo Wang, Brett Koonce, Cameron Thomas, Carl Thomé, Cem Eteke, cglewis, Changming Sun,
Charles Shenton, ChiHung, Chris Donahue, Chris Filo Gorgolewski, Chris Hoyean Song,
Chris Tava, Christian Grail, Christoph Boeddeker, cinqS, Clayne Robison, codrut3, concerttttt,
CQY, Dan Becker, Dan Jarvis, Daniel Zhang, David Norman, dmaclach, Dmitry Trifonov,
Donggeon Lim, dongpilYu, Dr. Kashif Rasul, Edd WilderJames, Eric Lv, fcharras, Felix Abecassis,
FirefoxMetzger, formath, FredZhang, Gaojin Cao, Gary Deer, Guenther Schmuelling, Hanchen Li,
Hanmin Qin, hannesa2, hyunyoung2, Ilya Edrenkin, Jackson Kontny, Jan, Javier Luraschi,
Jay Young, Jayaram Bobba, Jeff, Jeff Carpenter, Jeremy Sharpe, Jeroen BéDorf, Jimmy Jia,
Jinze Bai, Jiongyan Zhang, Joe Castagneri, Johan Ju, Josh Varty, Julian Niedermeier,
JxKing, Karl Lessard, Kb Sriram, Keven Wang, KoanSin Tan, Kyle Mills, lanhin, LevineHuang,
Loki Der Quaeler, Loo Rong Jie, Luke Iwanski, LáSzló Csomor, Mahdi Abavisani, Mahmoud Abuzaina,
ManHyuk, Marek ŠUppa, MathSquared, Mats Linander, Matt Wytock, Matthew Daley, Maximilian Bachl,
mdymczyk, melvyniandrag, Michael Case, Mike Traynor, miqlas, NamrataIbm, Nathan Luehr,
Nathan Van Doorn, Noa Ezra, Nolan Liu, Oleg Zabluda, opensourcemattress, Ouwen Huang,
Paul Van Eck, peisong, Peng Yu, PinkySan, pks, powderluv, Qiao HaiJun, Qiao Longfei,
Rajendra Arora, Ralph Tang, resec, Robin Richtsfeld, Rohan Varma, Ryohei Kuroki, SaintNazaire,
Samuel He, Sandeep Dcunha, sandipmgiri, Sang Han, scott, Scott Mudge, SeWon Kim, Simon Perkins,
Simone Cirillo, Steffen Schmitz, Suvojit Manna, Sylvus, Taehoon Lee, Ted Chang, Thomas Deegan,
Till Hoffmann, Tim, Toni Kunic, Toon Verstraelen, Tristan Rice, Urs KöSter, Utkarsh Upadhyay,
Vish (Ishaya) Abrams, Winnie Tsang, Yan Chen, Yan Facai (颜发才), Yi Yang, Yong Tang,
Youssef Hesham, Yuan (Terry) Tang, Zhengsheng Wei, zxcqwe4906, 张志豪, 田传武
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
TensorFlow 1.5.0rc0
av8ramit released this
Release 1.5.0
Breaking Changes
* Pre-built binaries are now built against CUDA 9 and cuDNN 7.
* Our Linux binaries are built using Ubuntu 16 containers, potentially introducing glibc incompatibility issues with Ubuntu 14.
* Starting from the 1.6 release, our pre-built binaries will use AVX instructions. This may break TF on older CPUs.
Major Features And Improvements
* Eager execution preview version is now available.
* TensorFlow Lite dev preview is now available.
* CUDA 9 and cuDNN 7 support.
Bug Fixes and Other Changes
* `auto_correlation` added to `tf.contrib.distributions`.
* Add `DenseFlipout` probabilistic layer.
* Re-standardize `DenseVariational` as simpler template for other probabilistic layers.
* Make `tf.contrib.distributions` QuadratureCompound classes support batch.
* `Stream::BlockHostUntilDone` now returns Status rather than bool.
* Customize request timeouts for the GCS filesystem.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
4d55397500, Abdullah Alrasheed, abenmao, Adam Salvail, Aditya Dhulipala, Ag Ramesh,
Akimasa Kimura, Alan Du, Alan Yee, Alexander, Amit Kushwaha, Amy, Andrei Costinescu,
Andrei Nigmatulin, Andrew Erlichson, Andrew Myers, Andrew Stepanov, Androbin, AngryPowman,
Anish Shah, Anton Daitche, Artsiom Chapialiou, asdf2014, Aseem Raj Baranwal, Ash Hall,
Bart Kiers, Batchu Venkat Vishal, ben, Ben Barsdell, Bill Piel, Carl Thomé, Catalin Voss,
Changming Sun, Chengzhi Chen, Chi Zeng, Chris Antaki, Chris Donahue, Chris Oelmueller,
Chris Tava, Clayne Robison, Codrut, Courtial Florian, Dalmo Cirne, Dan J, Darren Garvey,
David Kristoffersson, David Norman, David RöThlisberger, DavidNorman, Dhruv, DimanNe,
Dorokhov, Duncan MacVicar P, EdwardDixon, EMCP, error.d, FAIJUL, Fan Xia,
Francois Xavier, Fred Reiss, Freedom" KoanSin Tan, Fritz Obermeyer, Gao, Xiang,
Guenther Schmuelling, Guo Yejun (郭叶军), Hans Gaiser, HectorSVC, Hyungsuk Yoon,
James Pruegsanusak, Jay Young, Jean Wanka, Jeff Carpenter, Jeremy Rutman, Jeroen BéDorf,
Jett Jones, Jimmy Jia, jinghuangintel, jinze1994, JKurland, Joel Hestness, joetoth,
John B Nelson, John Impallomeni, John Lawson, Jonas, Jonathan Dekhtiar, joshkyh, Jun Luan,
Jun Mei, Kai Sasaki, Karl Lessard, karl@kubx.ca, Kb Sriram, Kenichi Ueno, Kevin Slagle,
Kongsea, Lakshay Garg, lhlmgr, Lin Min, liu.guangcong, Loki Der Quaeler, Louie Helm,
lucasmoura, Luke Iwanski, Lyndon White, Mahmoud Abuzaina, Marcel Puyat, Mark Aaron Shirley,
Michele Colombo, MtDersvan, NamrataIbm, Nathan Luehr, Naurril, Nayana Thorat, Nicolas Lopez,
Niranjan Hasabnis, Nolan Liu, Nouce, Oliver Hennigh, osdamv, Patrik Erdes,
Patryk Chrabaszcz, Pavel Christof, Penghao Cen, postBG, Qingqing Cao, Qingying Chen, qjivy,
Raphael, Rasmi, raymondxyang, Renze Yu, resec, Roffel, Ruben Vereecken, Ryohei Kuroki,
sandipmgiri, Santiago Castro, Scott Kirkland, Sean Vig, Sebastian Raschka, Sebastian Weiss,
Sergey Kolesnikov, Sergii Khomenko, Shahid, Shivam Kotwalia, Stuart Berg, Sumit Gouthaman,
superzerg, Sven Mayer, tetris, Ti Zhou, Tiago Freitas Pereira, Tian Jin, Tomoaki Oiki,
Vaibhav Sood, vfdev, Vivek Rane, Vladimir Moskva, wangqr, Weber Xie, Will Frey,
Yan Facai (颜发才), yanivbl6, Yaroslav Bulatov, Yixing Lao, Yong Tang, youkaichao,
Yuan (Terry) Tang, Yue Zhang, Yuxin Wu, Ziming Dong, ZxYuan, 黄璞
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.
TensorFlow 1.15.0rc1
goldiegadde released this
Sep 17, 2019
Release 1.15.0rc1
This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.
Major Features and Improvements
- The `tensorflow` pip package will by default include GPU support (same as `tensorflow-gpu` now) for the platforms we currently have GPU support for (Linux and Windows). It will work on machines with and without Nvidia GPUs. `tensorflow-gpu` will still be available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size.
- TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2` module. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1` module. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.
- Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
- Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
- AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with the `tf.data`, `tf.distribute` and `tf.keras` APIs.
- Add `enable_tensor_equality()`, which switches the behavior such that Tensors are no longer hashable and can be compared with `==` and `!=`, yielding a Boolean Tensor with element-wise comparison results.
- Auto Mixed-Precision graph rewrite can be enabled by wrapping an optimizer class with `tf.train.experimental.enable_mixed_precision_graph_rewrite()`.
- Add environment variable `TF_CUDNN_DETERMINISTIC`. Setting it to "true" or "1" forces the selection of deterministic cuDNN convolution and max-pooling algorithms. When this is enabled, the algorithm selection procedure itself is also deterministic.
- TensorRT:
  - Add a user-friendly `TrtGraphConverter` API for TensorRT conversion.
  - Expand support for TensorFlow operators in TensorRT conversion (e.g. `Gather`, `Slice`, `Pack`, `Unpack`, `ArgMin`, `ArgMax`, `DepthSpaceShuffle`).
  - Support the TensorFlow operator `CombinedNonMaxSuppression` in TensorRT conversion, which significantly accelerates object detection models.
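The `TF_CUDNN_DETERMINISTIC` variable described above is read from the process environment, so it can be set before TensorFlow is imported. A minimal sketch of one way to do that from a Python launcher (the variable name and accepted values are from the notes; using a launcher script rather than the shell is just one option):

```python
import os

# Force deterministic cuDNN convolution and max-pooling algorithm
# selection; per the notes, "true" or "1" both enable the behavior.
os.environ["TF_CUDNN_DETERMINISTIC"] = "1"

# TensorFlow, if imported after this point, would observe the setting.
print(os.environ["TF_CUDNN_DETERMINISTIC"])  # 1
```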
Breaking Changes
- Remove support of `constraint=` and `.constraint` with ResourceVariable.
- `tf.keras`:
  - `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
  - `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel.
  - `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed; a bug in the resizing implementation was fixed.
  - Layers now default to `float32` and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow 2, and a warning will be issued that starts with: Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.
- `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the `session.run()`. When this happens, a no-op is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
Bug Fixes and Other Changes
- `tf.estimator`:
  - `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  - Fix critical bugs that help with `DenseFeatures` usability in TF2.
- `tf.data`:
  - Promote `unbatch` from experimental to core API.
  - Support datasets as inputs to `from_tensors` and `from_tensor_slices`, and batching and unbatching of nested datasets.
- `tf.keras`:
  - `tf.keras.estimator.model_to_estimator` now supports exporting to `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  - Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.
  - Deprecate `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
  - Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
  - Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables a single training/eval/predict execution path. With this: 1. All input types are converted to `Dataset`. 2. When a distribution strategy is not specified, this goes through the no-op distribution strategy path. 3. Execution is wrapped in `tf.function` unless `run_eagerly=True` is set in compile.
  - Raise an error if the `batch_size` argument is used when the input is a dataset/generator/Keras sequence.
- `tf.lite`:
  - Add `GATHER` support to the NN API delegate.
  - Add delegate support for `QUANTIZE`.
  - Add delegate support for `QUANTIZED_16BIT_LSTM`.
- Add support for defaulting the value of the `cycle_length` argument of `tf.data.Dataset.interleave` to the number of schedulable CPU cores.
- `parallel_for`: add a converter for `MatrixDiag`.
- Add a `narrow_range` attribute to `QuantizeAndDequantizeV2` and V3.
- Add new op `tf.strings.unsorted_segment_join`.
- Add HW acceleration support for `topK_v2`.
- Add new `TypeSpec` classes.
- Expose `Head` as public API.
- Update docstring for `gather` to properly describe the non-empty `batch_dims` case.
- Add `tf.sparse.from_dense` utility function.
- Improve ragged tensor support in `TensorFlowTestCase`.
- `ResizeInputTensor` now works for all delegates.
- Add `EXPAND_DIMS` support to the NN API delegate (TEST: expand_dims_test).
- `tf.cond` emits a StatelessIf op if the branch functions are stateless and do not touch any resources.
- `tf.cond`, `tf.while`, and `if` and `while` in AutoGraph now accept a non-scalar predicate if it has a single element. This does not affect non-V2 control flow.
- `tf.while_loop` emits a StatelessWhile op if the cond and body functions are stateless and do not touch any resources.
- Improvements to `LogSoftMax`.
- Add `nested_value_rowids` for ragged tensors.
- Add `tf.math.cumulative_logsumexp` operation.
- Add `tf.ragged.stack`.
- Add `AddNewInputConstantTensor`.
- Updates to `MemoryAllocation::MemoryAllocation()`.
- Extract `NNAPIDelegateKernel` from nnapi_delegate.cc.
- Add support for `FusedBatchNormV3` in converter.
- Fix accidental quadratic graph construction cost in graph-mode `tf.gradients()`.
- The `precision_mode` argument to `TrtGraphConverter` is now case insensitive.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
a6802739, Aaron Ma, Abdullah Selek, Abolfazl Shahbazi, Ag Ramesh, Albert Z. Guo, Albin Joy, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Amit Srivastava, amoitra, Andrew Lihonosov, Andrii Prymostka, Anuj Rawat, Astropeak, Ayush Agrawal, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bryan Cutler, candy.dc, Cao Zongyan, CaptainPool, Casper Da CostaLuis, Chen Guoyin, Cheng Chang, chengchingwen, Chong Yan, Choong Yin Thong, Christopher Yeh, Clayne Robison, Coady, Patrick, Dan Ganea, David Norman, Denis Khalikov, Deven Desai, Diego Caballero, Duncan Dean, Duncan Riach, Dwight J Lyle, Eamon ItoFisher, eashtian3, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Fangjun Kuang, Fei Hu, fo40225, formath, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. Hussain Chinoy, Gabriel, gehring, George Grzegorz Pawelczak, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, haison, Haraldur TóMas HallgríMsson, HarikrishnanBalagopal, HåKon Sandsmark, IHong, Ilham Firdausi Putra, Imran Salam, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, Jeroen BéDorf, Jerry Shih, jerryyin, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Joon, Josh Beal, Julian Niedermeier, Jun Wan, Junqin Zhang, Junyuan Xie, Justin Tunis, Kaixi Hou, Karl Lessard, Karthik Muthuraman, KbhuteIbm, khanhlvg, Koock Yoon, kstuedem, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, LeslieFang, LeslieFangIntel, Li, Guizi, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manraj Singh Grover, Margaret MaynardReid, Mark Ryan, Matt Conley, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. 
Tarnowski, minds, mpppk, musikisomorphie, Nagy Mostafa, Nayana Thorat, Neil, Niels Ole Salscheider, Niklas SilfverströM, Niranjan Hasabnis, ocjosen, olramde, Pariksheet Pinjari, Patrick J. Lopresti, Patrik Gustavsson, per1234, PeterLee, Phan Van Nguyen Duc, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, richardbrks, robert, RonLek, Ryan Jiang, saishruthi, Saket Khandelwal, Saleem Abdulrasool, Sami Kama, SanaDamani, Sergii Khomenko, Severen Redwood, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, Srini511, srinivasan.narayanamoorthy, Sumesh Udayakumaran, Sungmann Cho, TaeHwan Jung, Taehoon Lee, Takeshi Watanabe, TengLu, terryky, TheMindVirus, ThisIsIsaac, Till Hoffmann, Timothy Liu, Tomer Gafner, Tongxuan Liu, Trent Lo, Trevor Morris, Uday Bondhugula, Vasileios Lioutas, vbvg2008, Vishnuvardhan Janapati, Vivek Suryamurthy, Wei Wang, WenHeng (Jack) Chung, wenxizhu, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xinan Jiang, Xinping Wang, YannYy, Yasir Modak, Yong Tang, Yongfeng Gu, Yuchen Ying, Yuxin Wu, zyeric, 王振华 (Zhenhua Wang)