From 9e7bf403817a3acd4e8d865b041f37609564076e Mon Sep 17 00:00:00 2001
From: drpngx
Date: Mon, 10 Apr 2017 13:55:56 -0700
Subject: [PATCH] Branch 152703253 (#9112)

* Improve py_func error handling.
Automatically translate some Python errors into corresponding TF errors at runtime.
Change: 152156821

* Update the interaction with libpng so that we use the public API instead of knowledge of the internal libpng data structures.
Change: 152167754

* TensorBoard plugins now contain their own name/route prefix.
Change: 152167807

* Passes trainable flag to separable_conv2d biases.
Change: 152170239

* Saving resource variables with a caching device.
Change: 152171539

* Drop loss from estimator_spec.eval_metric_ops, as required by core Estimator.
Change: 152179924

* sample_stats.percentile DOCFIX.
Change: 152182295

* Added a memory optimizer to grappler.
Change: 152184170

* Change default behavior of the tf runs selector:
  - If there are fewer than 41 runs, enable them all by default.
  - If there are 41 runs or more, disable them all by default.
This is in response to user complaints that having it enable only the first ten runs by default was confusing, because it was not obvious to users that some runs had been disabled. However, it still solves the initial user complaint that having very many runs simultaneously enabled would lag the UI.
I also changed the "toggle all runs" button to try to turn everything off before turning everything on.
Also, I improved the logic for detecting when the runs selection is back in the default state, so that we can avoid generating long URI strings wherever possible.
Change: 152188948

* Autogenerated Change: Change TensorBoard TAG to 52
Change: 152189000

* Remove a warning that only happens with config cuda.
Change: 152189205

* Make resource variable shared name consistent with non-resource variables.
Remove the colocation constraint from the resource variable cached value with the variable itself.
Change: 152192203

* Add a way to specify the optimization order; refactor and add constant folding to the meta optimizer.
Change: 152193646

* Backport fixes and improvements from external Keras.
Change: 152198296

* Merge changes from github.
Change: 152200430

* Go: Update generated wrapper functions for TensorFlow ops.
Change: 152200754

* Update ops-related pbtxt files.
Change: 152203174

* Make ImportGraphDef() work with functions.
In addition to modifying graph_constructor.cc, this patch adds some other functionality to enable importing functions:
  * Ability to add FunctionDefLibraries to Graphs and FunctionLibraryDefinitions (in addition to existing functions)
  * FunctionDefsEqual() utility function
Change: 152205258

* Expand contrib test to more than just test targets.
Change: 152206822

* Preserve graph version during optimization.
Change: 152213262

* Exclude enter and exit nodes from the shape refiner's constant folding.
Change: 152213637

* Allow reshape_mover and algebraic_simplifier to make multiple mutations, by avoiding the short-circuit std::any_of.
Change: 152232810

* Fix dynamic_rnn transpose bug (can input/output non-3d tensors).
Also a few cleanups to RNN code.
Change: 152267628

* Fix flaky tests.
Change: 152272801

* Add an auto parallelization grappler optimization pass.
Change: 152276787

* Change json.decode.JSONDecodeError to ValueError.
JSONDecodeError seems to be the exception used in the simplejson module, not the json module.
Change: 152278012

* Internal change.
Change: 152281471

* [XLA] Force buffer sharing of separate while instructions.
Change: 152288540

* replica_device_setter should work for resource variables.
Change: 152289915

* Fix the ./configure script:
  1. Add %workspace% in the .bazelrc file when using the import statement.
  2.
Write action_env into the bazelrc file for environment variables required for OpenCL support.
Change: 152290700

* Pointing a number of TensorBoard graph visualization-related help links to the new locations of the corresponding API documentation.
Change: 152293459

* Restore most of pull request #8606.
Pull request #8606 added str(Label(...)) for most dependencies in tensorflow.bzl, allowing most functions to be used from repositories which include TensorFlow as a submodule. Unfortunately, it broke when pulled into Google and was removed in cl/152200430. This CL restores the change, except for two Android-only functions; these were the only problematic bits.
Change: 152297413

* Removed dead code in Estimator.
Change: 152297597

* Assert rank is at least equal to new_rank for `_sparse_inner_flatten`.
Change: 152303319

* Extend quantization ranges to include 0.0f.
Change: 152304380

* Remove Keras config file saving.
Change: 152306552

* API backwards compatibility tests.
Change: 152310869

* [TF:XLA] Add a test for an R3 -> R4 broadcast.
Change: 152313967

* Fix the problem that there are not enough placeholders for persistent tensor batch deletion.
The deleter_key is always a device name, hence there is only one of it, and we cannot delete more than one handle at a time. With this fix, delete placeholders are created on demand; the maximum number of placeholders is _DEAD_HANDLES_THRESHOLD.
Change: 152322770

* [XLA] Add several reduction tests.
Change: 152323510

* Added the memory optimizer to the meta optimizer.
Change: 152323689

* Started a set of utilities to categorize op types.
Change: 152329057

* Add AudioSpectrogram op to TensorFlow for audio feature generation.
Change: 152332221

* Update ops-related pbtxt files.
Change: 152332812

* Automated rollback of change 152332221.
Change: 152333917

* Call Py_CLEAR on dead fields during TF_RESOURCE-to-ndarray conversion.
Change: 152338333

* [TF contrib seq2seq] Initial, incomplete implementation of beam search decoder.
**DOES NOT WORK, pushed for collaboration only**
Change: 152343927

* [XLA] Change HloPassPipeline to disallow Add* calls after Run.
Change: 152345578

* Automated rollback of change 152332812.
Change: 152349057

* Remove all 64/32-bit compiler warnings from core/ops.
Change: 152353506

* libtensorflow.so: Don't export private symbols.
With this change, libtensorflow.so will only export functions defined in c_api.h. This also results in a decreased binary size of libtensorflow.so. On Linux the decrease was from roughly 150MB to 67MB. On OS X it was from roughly 101MB to 82MB.
Also fixes #8923
Change: 152366053

* Add Elu ops in XLA.
Change: 152383201

* Fixed test. ('broadcast_dims' has size 1)
Change: 152383633

* Add a more detailed error message for the rank assertion in _sparse_inner_flatten.
Change: 152397909

* tensor_bundle: propagates errors related to directory creation.
Change: 152401909

* matrix_adjoint added to contrib/linalg/linear_operator_util.
Change: 152404828

* Add an is_active method to plugins.
This method determines whether a plugin is active. A plugin may be inactive if, say, it lacks data. This new is_active method allows us to add a route to TensorBoard noting which plugins are active. The frontend could then avoid querying routes of inactive plugins.
Change: 152406232

* Replace a gather op for shapes by a stack op so dilated convolutions can be placed on GPU even with strict placement (before, the gather went to CPU).
Change: 152411159

* [TF:XLA] Implement BatchToSpace, BatchToSpaceND, SpaceToBatch, SpaceToBatchND.
Fix crashes in core implementations of the same operators for zero-sized blocks.
Change: 152416903

* Estimator saves relative paths in checkpoint.
Change: 152420211

* Fix layers_test exception regex matching.
Change: 152422855

* Unhide bijectors. Correct TransformedDistribution docstring.
Change: 152424418

* Choosing a saner default for min_eval_frequency in the constructor of Experiment for the GCS file system, because the default of 1 causes performance problems.
Change: 152439984

* Inherit use_resource from scope for partitioned variables.
Change: 152442103

* Support quantized reshape in the hexagon runtime.
Change: 152445539

* tfdbg CLI: add command list_source (ls) + UI fixes and improvements.
The new list_source (shorthand: ls) command lists the Python source files responsible for constructing the nodes and tensors encountered in the run() call. It divides the source files into two categories and lists them separately: 1) files that are not part of the TensorFlow Python library, and 2) files that are a part of it. The list contains information about how many nodes, tensors, and dumps of tensors each file is responsible for. The file paths contain clickable links to the existing print_source/ps command. The list_source/ls command supports filtering by file-path and node-name regex patterns.
UI fixes:
  * Fixed inconsistent black vs. transparent background color that made the layout look messy on some terminal types. Now using the transparent color for the default font color consistently.
  * In the print_source command output, add clickable links to expand source lines and graph elements.
Change: 152446002

* tfcompile: Be a little more verbose about missing required flags.
Fixes #9014
Change: 152446338

* Disable failing test cases in pooling_ops_test.
Change: 152447322

* Register more types for tf.image_crop_and_resize().
Resolves #9020.
Change: 152448160

* Automated rollback of change 152439984.
Change: 152450929

* Add a route to TensorBoard for fetching plugin names.
Specifically, we add a /data/plugins_listing route to the TensorBoard application. This route responds with an object mapping the name of each initialized plugin to whether it is active. This route could help the frontend avoid issuing requests to inactive plugins.
Ordered the listing of routes within application.py so there is a little more organization. Refactored the application test to use a fake plugin.
Change: 152451390

* Added the ability to retrieve the amount of usable GPU memory.
Change: 152453470

* Allow setting the session ConfigProto in RunConfig and using it in Estimator.
Change: 152454548

* Colocate ResourceVariable reads with their handles.
Change: 152455939

* tfdbg: update doc for new command list_source/ls.
Change: 152456128

* Make rnn directions slightly easier to follow.
Change: 152456296

* Internal change.
Change: 152458104

* Adds batch renormalization.
NOTE: if you use renormalization, you might want to use faster moving average updates, i.e. lower `decay` values.
Change: 152458872

* When using ImportGraphDef with a passed-in ShapeRefiner, use the producer version of the GraphDef when importing; the ShapeRefiner may be initialized with a different graph_def_version, so we need to be able to override it.
The test failed without the change to graph_constructor and passes with it. The test uses a legacy graph that is supported (reduction shape).
Change: 152459169

* Allow any iterable for the `export_strategies` arg.
Change: 152461826

* Log steps/sec every 100 steps in MonitoredSession, as before.
Change: 152465320

* Fixes documentation to note that, in case of ties, the identity of the return value of ArgMin and ArgMax is not guaranteed.
Change: 152465346

* Automated rollback of change 152465346.
Change: 152465844

* Fix shape inference fn on _ParallelConcatStart.
Change: 152466076

* Fix getting started guide:
  - Explain numerical differences in loss.
  - Fix one example to print.
Change: 152466119

* Remove superfluous mode argument.
Change: 152467334

* Add a tool that converts HLO computations to a tensorflow GraphDef which can be visualized on TensorBoard. This CL defines a basic tensorflow::OpDef for each HLO instruction/node. More attributes (e.g. shapes, colors) will be added in the future.
Change: 152477918

* [TF:XLA] Increase shard count of //third_party/tensorflow/compiler/tests:spacetobatch_test to reduce flakiness when built under ASAN.
Change: 152496244

* Make the projector plugin backend read assets saved via the PluginAssets API. At the same time, keep backwards compatibility with the old way of looking up assets.
Change: 152504793

* Move MNIST pointers to the mirror hosted by CVDF on Google Cloud.
Fixes: #9031
Change: 152504901

* Merge changes from github.
Change: 152508170

* Update API after changing the default step counter frequency in an earlier change.
Change: 152517535

* Move a few random op helper functions to header files:
  1. shape_inference::RandomShape
  2. OpKernel::MakeShape(Tensor, TensorShape*)
Change: 152522156

* Addresses the divide-by-zero bug.
Change: 152522488

* Clarify doc on tf.assign.
Change: 152523909

* Sparse adam for resource variables.
Change: 152525327

* Automated rollback of change 152310869.
Change: 152528732

* Add an env var tf_sync_on_finish_bool that, if true, blocks until the device has finished all queued operations in a step.
Change: 152533676

* Add more node attributes for HloInstruction on TensorBoard, e.g. shape and layout etc.
Change: 152534472

* Add tf.complex64 GPU support to tf.gather.
Also add ldg specializations for std::complex.
Change: 152537848

* Formatting changes.
Change: 152544842

* Upgrade TensorBoard TypeScript to 2.2.1.
See also: #8326
Change: 152545950

* TEST: Getting reasonable test sizes on the linalg library, removing the need for sharding.
Change: 152546409

* Disabling _testSourceUtilModuleReturnsTrue as it's causing open-source issues.
Change: 152548721

* Fix race due to unsafe buffer forwarding in maxpooling second order gradients added in #6664.
Re-enable previously flaky tests. Clean up a few minor things in maxpooling_op_gpu.cu.cc.
Change: 152550050

* LinearOperator: adjoint_arg kwarg added to all operators.
Now, operator.apply(x, adjoint_arg=True) means that the adjoint of 'x' is taken before application of the operator.
Sometimes this is done more efficiently than simply taking the adjoint.
Change: 152560471

* Adds weighted_average_loss metric key.
Change: 152560999

* Documentation: Fix bug in manual device placement example.
Change: 152563392

* Change for internal compatibility.

* Use std::vector for storage instead of a map. Do the sorting in place and return the same vector to avoid any copies. On larger streams it is about 50% faster.
Change: 152576112

* Add tf.add_n GPU support for complex64/complex128.
Also adds a unit test for tf.add_n.
Change: 152577190

* - Adds support for nested types in tf.case and tf.cond.
  - Adds a "strict" mode which disables silent unpacking of singleton lists.
  - Adds shape inference to tf.case.
  - Adds a lot of unit tests.
Change: 152581097

* [XLA] Add support for folding transpose into convolution.
Change: 152581336

* Add a smoke test to ensure that the doc generator runs.
Change: 152592164

* Add tensorboard to the _do_not_descend_map of the PublicAPIVisitor.
Change: 152592268

* Add auto parallelization to the meta optimizer. Enable MetaOptimizer if any one of the optimizers is on.
Change: 152598517

* Update ops-related pbtxt files.
Change: 152629248

* Prevent the renorm_weight from being updated too early.
Change: 152631776

* Automated rollback of change 152528732.
Change: 152652473

* Construct TensorBoard dashboards in a JS list.
Previously, adding a dashboard to TensorBoard involved changing logic in several places. As part of this effort, added constructors to dashboards. Tweaked logic in various dashboards to preserve original behavior. For instance, the graph dashboard can only perform fitting after the dashboard is attached to the DOM.
Change: 152658532

* Make CheckpointSaverListener visible next to CheckpointSaverHook.
Change: 152662945

* tfdbg CLI: minor bug fixes.
  1: The calculation of the scroll command in the scroll bar didn't take into account that the y-coordinate of the scroll block is in the ScrollBar coordinate system, while the mouse-click y-coordinate is in the screen coordinate system.
  2: The y position of the ScrollBar was off by one.
  3: The command box was not re-created after mouse-triggered commands, leading to a strange-looking cursor position.
Change: 152684294

* Remove obsolete use of validate_indices from embedding_ops.py.
validate_indices is ignored, so it shouldn't appear in new code.
Change: 152691948

* Preparation for using GMock matchers in XLA tests.
Change: 152691970

* Replace RuntimeException with RuntimeError in the coordinator documentation.
Change: 152697758

* Move the TensorBoard debugger plugin to be internal. This feature is currently not open-source anyway.
Change: 152700267

* Add a single-machine tf.learn Estimator implementation for the WALS solver.
Change: 152700915

* Add tf.contrib.training.python_input -- making it easy to feed data into TensorFlow from Python coroutines.
Change: 152701623

* Show that QuantizeToFloat consistently introduces a small error.
The error is equal to range_min - round(range_min / range_scale) * range_scale.
Change: 152702015

* Internal Changes
Change: 152703253

* Remove tensorflow/tensorboard/plugins/debugger, as part of merge resolution.
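The runs-selector change above (change 152188948) describes a simple threshold policy. The real implementation lives in TensorBoard's TypeScript frontend; the sketch below is an illustrative Python rendering of the described policy, with the constant and function names chosen here, not taken from the codebase.

```python
# Illustrative sketch of the default run-selection policy described above.
RUN_LIMIT = 41  # threshold taken from the change description

def default_run_selection(runs):
    """Enable every run when there are fewer than 41 runs; otherwise
    disable them all, so very large run sets do not lag the UI."""
    enable_all = len(runs) < RUN_LIMIT
    return {run: enable_all for run in runs}
```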
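The JSONDecodeError change (change 152278012) rests on a portability detail worth spelling out: `json.JSONDecodeError` only exists in Python 3.5+, where it subclasses `ValueError`, and earlier versions of the `json` module raise `ValueError` directly, so catching `ValueError` works everywhere. A small self-contained illustration (the helper name is ours):

```python
import json

def try_parse(text):
    """Return the parsed object, or None on invalid JSON.

    Catching ValueError is portable across Python versions:
    json.JSONDecodeError (3.5+) subclasses ValueError, and older
    json modules raise ValueError directly.
    """
    try:
        return json.loads(text)
    except ValueError:
        return None
```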
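The LinearOperator `adjoint_arg` change (change 152560471) can be mirrored in plain NumPy to show what the flag means; `apply_op` below is a stand-in for `operator.apply`, not the contrib API itself:

```python
import numpy as np

def apply_op(a, x, adjoint_arg=False):
    """Stand-in for LinearOperator.apply: with adjoint_arg=True, the
    adjoint (conjugate transpose) of x is taken before applying a."""
    if adjoint_arg:
        x = np.conj(x).T
    return a @ x

a = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([[1.0, 0.0], [5.0, 1.0]])
# For real x, adjoint_arg=True reduces to a @ x.T.
```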
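The QuantizeToFloat note (change 152702015) gives a closed form for the round-trip error. Assuming an affine scheme in which representable floats sit on a grid of multiples of range_scale, snapping range_min onto that grid loses exactly the stated amount. A hedged numeric check of the formula as quoted (the helper name is ours, not TF's):

```python
def range_min_snap_error(range_min, range_scale):
    """The error formula quoted in the change description:
    range_min - round(range_min / range_scale) * range_scale,
    i.e. the offset lost when range_min is snapped to the grid."""
    return range_min - round(range_min / range_scale) * range_scale

# e.g. range_min = -0.6 with range_scale = 0.25: the nearest grid point
# is -0.5, so every dequantized value is off by about -0.1; when
# range_min already lies on the grid, the error vanishes.
```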
--- tensorflow/BUILD | 9 +- tensorflow/compiler/xla/BUILD | 14 +- tensorflow/compiler/xla/literal_util_test.cc | 48 +- tensorflow/compiler/xla/service/BUILD | 4 + .../compiler/xla/service/cpu/cpu_compiler.cc | 9 +- .../compiler/xla/service/gpu/gpu_compiler.cc | 8 +- .../xla/service/gpu/ir_emission_utils.h | 11 +- .../compiler/xla/service/transpose_folding.cc | 148 +- .../compiler/xla/service/transpose_folding.h | 31 +- .../xla/service/transpose_folding_test.cc | 188 +- tensorflow/compiler/xla/test.h | 48 + .../lib/quantiles/weighted_quantiles_buffer.h | 50 +- .../weighted_quantiles_buffer_test.cc | 57 +- .../lib/quantiles/weighted_quantiles_stream.h | 4 +- .../weighted_quantiles_summary_test.cc | 11 +- tensorflow/contrib/cmake/tf_python.cmake | 1 - tensorflow/contrib/factorization/BUILD | 1 + .../python/ops/factorization_ops.py | 27 +- .../contrib/factorization/python/ops/wals.py | 321 +++ .../factorization/python/ops/wals_test.py | 305 ++- .../layers/python/layers/embedding_ops.py | 15 +- .../kernel_tests/linear_operator_test.py | 5 +- .../linalg/python/ops/linear_operator.py | 39 +- .../python/ops/linear_operator_composition.py | 14 +- .../linalg/python/ops/linear_operator_diag.py | 6 +- .../python/ops/linear_operator_full_matrix.py | 9 +- .../python/ops/linear_operator_identity.py | 13 +- .../python/ops/linear_operator_test_util.py | 66 +- .../linalg/python/ops/linear_operator_tril.py | 8 +- .../python/ops/linear_operator_udvh_update.py | 12 +- tensorflow/contrib/training/BUILD | 23 + tensorflow/contrib/training/__init__.py | 2 + .../training/python/training/bucket_ops.py | 12 +- .../training/python/training/python_input.py | 178 ++ .../python/training/python_input_test.py | 191 ++ tensorflow/core/grappler/BUILD | 1 + tensorflow/core/grappler/optimizers/BUILD | 2 + .../core/grappler/optimizers/auto_parallel.h | 4 +- .../grappler/optimizers/meta_optimizer.cc | 18 +- tensorflow/core/kernels/aggregate_ops.cc | 8 +- .../core/kernels/aggregate_ops_gpu.cu.cc | 2 + 
.../core/kernels/quantization_utils_test.cc | 18 + tensorflow/core/ops/array_ops.cc | 8 +- .../core/protobuf/rewriter_config.proto | 7 + tensorflow/docs_src/tutorials/using_gpu.md | 7 +- tensorflow/python/__init__.py | 1 + tensorflow/python/debug/cli/curses_ui.py | 30 +- tensorflow/python/debug/cli/curses_ui_test.py | 21 +- tensorflow/python/estimator/model_fn.py | 1 + tensorflow/python/kernel_tests/BUILD | 13 + .../python/kernel_tests/aggregate_ops_test.py | 79 + tensorflow/python/ops/control_flow_ops.py | 188 +- .../python/ops/control_flow_ops_test.py | 345 +++ tensorflow/python/ops/embedding_ops.py | 23 +- tensorflow/python/tools/inspect_checkpoint.py | 2 +- tensorflow/python/training/coordinator.py | 4 +- tensorflow/python/training/training.py | 2 + tensorflow/tensorboard/BUILD | 2 + tensorflow/tensorboard/backend/BUILD | 3 - tensorflow/tensorboard/backend/application.py | 28 +- .../tensorboard/backend/application_test.py | 56 +- .../tf-audio-dashboard.html | 6 +- .../components/tf_dashboard_common/BUILD | 2 + .../tf_dashboard_common/dashboard-behavior.ts | 40 + .../tf_dashboard_common/tf-dashboard.html | 1 + .../tf_dashboard_common/tf-run-selector.html | 25 +- .../tf-distribution-dashboard.html | 6 +- .../components/tf_graph/tf-graph-scene.html | 35 +- .../tf-graph-dashboard.html | 7 +- .../tf-histogram-dashboard.html | 6 +- .../tf-image-dashboard.html | 10 +- .../tf-scalar-dashboard.html | 7 +- .../tf_tensorboard/tf-tensorboard.html | 174 +- .../tf_text_dashboard/tf-text-dashboard.html | 11 +- .../vz_projector/vz-projector-dashboard.html | 28 +- tensorflow/tensorboard/plugins/debugger/BUILD | 55 - .../plugins/debugger/debugger_plugin.py | 355 --- .../plugins/debugger/debugger_plugin_test.py | 300 --- tensorflow/tensorboard/tensorboard.py | 26 +- tensorflow/tools/api/golden/BUILD | 24 + .../tensorflow.-aggregation-method.pbtxt | 24 + .../tensorflow.-attr-value.-list-value.pbtxt | 108 + .../api/golden/tensorflow.-attr-value.pbtxt | 120 + 
.../tensorflow.-auto-parallel-options.pbtxt | 84 + ...orflow.-conditional-accumulator-base.pbtxt | 29 + .../tensorflow.-conditional-accumulator.pbtxt | 38 + ...ow.-config-proto.-device-count-entry.pbtxt | 84 + .../api/golden/tensorflow.-config-proto.pbtxt | 132 ++ .../tools/api/golden/tensorflow.-d-type.pbtxt | 77 + .../api/golden/tensorflow.-device-spec.pbtxt | 37 + .../api/golden/tensorflow.-dimension.pbtxt | 25 + .../tools/api/golden/tensorflow.-event.pbtxt | 112 + .../golden/tensorflow.-f-i-f-o-queue.pbtxt | 62 + .../tensorflow.-fixed-len-feature.pbtxt | 27 + ...nsorflow.-fixed-len-sequence-feature.pbtxt | 31 + ...nsorflow.-fixed-length-record-reader.pbtxt | 46 + .../golden/tensorflow.-g-p-u-options.pbtxt | 104 + .../api/golden/tensorflow.-graph-def.pbtxt | 92 + .../api/golden/tensorflow.-graph-keys.pbtxt | 136 ++ .../golden/tensorflow.-graph-options.pbtxt | 112 + .../tools/api/golden/tensorflow.-graph.pbtxt | 129 ++ .../golden/tensorflow.-histogram-proto.pbtxt | 104 + .../golden/tensorflow.-identity-reader.pbtxt | 46 + .../golden/tensorflow.-indexed-slices.pbtxt | 42 + .../tensorflow.-interactive-session.pbtxt | 43 + .../api/golden/tensorflow.-log-message.pbtxt | 112 + ...nsorflow.-name-attr-list.-attr-entry.pbtxt | 84 + .../golden/tensorflow.-name-attr-list.pbtxt | 88 + .../tensorflow.-node-def.-attr-entry.pbtxt | 84 + .../api/golden/tensorflow.-node-def.pbtxt | 100 + .../api/golden/tensorflow.-op-error.pbtxt | 29 + .../api/golden/tensorflow.-operation.pbtxt | 65 + .../tensorflow.-optimizer-options.pbtxt | 128 ++ .../tensorflow.-padding-f-i-f-o-queue.pbtxt | 62 + .../golden/tensorflow.-priority-queue.pbtxt | 62 + .../api/golden/tensorflow.-queue-base.pbtxt | 61 + .../tensorflow.-random-shuffle-queue.pbtxt | 62 + .../api/golden/tensorflow.-reader-base.pbtxt | 45 + .../tensorflow.-register-gradient.pbtxt | 9 + .../golden/tensorflow.-rewriter-config.pbtxt | 112 + .../api/golden/tensorflow.-run-metadata.pbtxt | 88 + .../api/golden/tensorflow.-run-options.pbtxt | 
116 + .../api/golden/tensorflow.-session-log.pbtxt | 108 + .../api/golden/tensorflow.-session.pbtxt | 47 + ...flow.-sparse-conditional-accumulator.pbtxt | 46 + .../golden/tensorflow.-sparse-feature.pbtxt | 35 + .../tensorflow.-sparse-tensor-value.pbtxt | 26 + .../golden/tensorflow.-sparse-tensor.pbtxt | 46 + .../golden/tensorflow.-summary.-audio.pbtxt | 96 + .../golden/tensorflow.-summary.-image.pbtxt | 92 + .../golden/tensorflow.-summary.-value.pbtxt | 108 + .../api/golden/tensorflow.-summary.pbtxt | 92 + .../tensorflow.-t-f-record-reader.pbtxt | 46 + .../api/golden/tensorflow.-tensor-array.pbtxt | 69 + .../api/golden/tensorflow.-tensor-info.pbtxt | 88 + .../api/golden/tensorflow.-tensor-shape.pbtxt | 73 + .../tools/api/golden/tensorflow.-tensor.pbtxt | 58 + .../golden/tensorflow.-text-line-reader.pbtxt | 46 + .../golden/tensorflow.-var-len-feature.pbtxt | 19 + .../golden/tensorflow.-variable-scope.pbtxt | 97 + ...ensorflow.-variable.-save-slice-info.pbtxt | 17 + .../api/golden/tensorflow.-variable.pbtxt | 101 + .../tensorflow.-whole-file-reader.pbtxt | 46 + .../tools/api/golden/tensorflow.app.pbtxt | 11 + .../tools/api/golden/tensorflow.compat.pbtxt | 35 + .../tensorflow.constant_initializer.pbtxt | 10 + .../tensorflow.errors.-aborted-error.pbtxt | 30 + ...sorflow.errors.-already-exists-error.pbtxt | 30 + .../tensorflow.errors.-cancelled-error.pbtxt | 30 + .../tensorflow.errors.-data-loss-error.pbtxt | 30 + ...flow.errors.-deadline-exceeded-error.pbtxt | 30 + ...ow.errors.-failed-precondition-error.pbtxt | 30 + .../tensorflow.errors.-internal-error.pbtxt | 30 + ...rflow.errors.-invalid-argument-error.pbtxt | 30 + .../tensorflow.errors.-not-found-error.pbtxt | 30 + .../golden/tensorflow.errors.-op-error.pbtxt | 29 + ...ensorflow.errors.-out-of-range-error.pbtxt | 30 + ...flow.errors.-permission-denied-error.pbtxt | 30 + ...low.errors.-resource-exhausted-error.pbtxt | 30 + ...orflow.errors.-unauthenticated-error.pbtxt | 30 + 
...tensorflow.errors.-unavailable-error.pbtxt | 30 + ...nsorflow.errors.-unimplemented-error.pbtxt | 30 + .../tensorflow.errors.-unknown-error.pbtxt | 30 + .../tools/api/golden/tensorflow.errors.pbtxt | 151 ++ ...tensorflow.estimator.-estimator-spec.pbtxt | 47 + .../tensorflow.estimator.-estimator.pbtxt | 37 + .../tensorflow.estimator.-mode-keys.pbtxt | 20 + .../tensorflow.estimator.-run-config.pbtxt | 68 + ...-classification-output.__metaclass__.pbtxt | 14 + ...imator.export.-classification-output.pbtxt | 22 + ....export.-export-output.__metaclass__.pbtxt | 14 + ...flow.estimator.export.-export-output.pbtxt | 12 + ...export.-predict-output.__metaclass__.pbtxt | 14 + ...low.estimator.export.-predict-output.pbtxt | 18 + ...ort.-regression-output.__metaclass__.pbtxt | 14 + ....estimator.export.-regression-output.pbtxt | 18 + ...mator.export.-serving-input-receiver.pbtxt | 23 + .../golden/tensorflow.estimator.export.pbtxt | 31 + .../golden/tensorflow.estimator.inputs.pbtxt | 11 + .../api/golden/tensorflow.estimator.pbtxt | 27 + .../tensorflow.gfile.-fast-g-file.pbtxt | 58 + .../api/golden/tensorflow.gfile.-g-file.pbtxt | 58 + .../api/golden/tensorflow.gfile.-open.pbtxt | 58 + .../tools/api/golden/tensorflow.gfile.pbtxt | 63 + .../api/golden/tensorflow.graph_util.pbtxt | 23 + .../tensorflow.image.-resize-method.pbtxt | 24 + .../tools/api/golden/tensorflow.image.pbtxt | 175 ++ .../tools/api/golden/tensorflow.layers.pbtxt | 59 + .../tools/api/golden/tensorflow.logging.pbtxt | 83 + .../tools/api/golden/tensorflow.losses.pbtxt | 63 + .../tools/api/golden/tensorflow.metrics.pbtxt | 99 + .../tools/api/golden/tensorflow.nn.pbtxt | 323 +++ .../golden/tensorflow.ones_initializer.pbtxt | 10 + .../tensorflow.orthogonal_initializer.pbtxt | 10 + tensorflow/tools/api/golden/tensorflow.pbtxt | 1947 +++++++++++++++++ ...thon_io.-t-f-record-compression-type.pbtxt | 20 + ...orflow.python_io.-t-f-record-options.pbtxt | 17 + ...sorflow.python_io.-t-f-record-writer.pbtxt | 17 + 
.../api/golden/tensorflow.python_io.pbtxt | 19 + ...tensorflow.random_normal_initializer.pbtxt | 10 + ...ensorflow.random_uniform_initializer.pbtxt | 10 + .../golden/tensorflow.resource_loader.pbtxt | 23 + ...d_model.builder.-saved-model-builder.pbtxt | 21 + .../tensorflow.saved_model.builder.pbtxt | 7 + .../tensorflow.saved_model.constants.pbtxt | 39 + .../tensorflow.saved_model.loader.pbtxt | 11 + .../tensorflow.saved_model.main_op.pbtxt | 11 + .../api/golden/tensorflow.saved_model.pbtxt | 35 + ...flow.saved_model.signature_constants.pbtxt | 47 + ...flow.saved_model.signature_def_utils.pbtxt | 19 + ...tensorflow.saved_model.tag_constants.pbtxt | 11 + .../golden/tensorflow.saved_model.utils.pbtxt | 7 + .../tools/api/golden/tensorflow.sdca.pbtxt | 3 + .../tools/api/golden/tensorflow.sets.pbtxt | 19 + .../api/golden/tensorflow.spectral.pbtxt | 51 + .../golden/tensorflow.summary.-event.pbtxt | 112 + ...ensorflow.summary.-file-writer-cache.pbtxt | 16 + .../tensorflow.summary.-file-writer.pbtxt | 50 + .../tensorflow.summary.-session-log.pbtxt | 108 + ...sorflow.summary.-summary-description.pbtxt | 80 + .../tensorflow.summary.-summary.-audio.pbtxt | 96 + .../tensorflow.summary.-summary.-image.pbtxt | 92 + .../tensorflow.summary.-summary.-value.pbtxt | 108 + .../golden/tensorflow.summary.-summary.pbtxt | 92 + ...sorflow.summary.-tagged-run-metadata.pbtxt | 84 + .../tools/api/golden/tensorflow.summary.pbtxt | 67 + .../api/golden/tensorflow.sysconfig.pbtxt | 11 + .../golden/tensorflow.test.-benchmark.pbtxt | 21 + .../tools/api/golden/tensorflow.test.pbtxt | 51 + ...tensorflow.train.-adadelta-optimizer.pbtxt | 46 + ...sorflow.train.-adagrad-d-a-optimizer.pbtxt | 46 + .../tensorflow.train.-adagrad-optimizer.pbtxt | 46 + .../tensorflow.train.-adam-optimizer.pbtxt | 46 + .../golden/tensorflow.train.-bytes-list.pbtxt | 80 + ...sorflow.train.-checkpoint-saver-hook.pbtxt | 30 + ...low.train.-checkpoint-saver-listener.pbtxt | 24 + ...sorflow.train.-chief-session-creator.pbtxt | 14 
+ .../tensorflow.train.-cluster-def.pbtxt | 80 + .../tensorflow.train.-cluster-spec.pbtxt | 37 + .../tensorflow.train.-coordinator.pbtxt | 45 + .../golden/tensorflow.train.-example.pbtxt | 80 + ...ow.train.-exponential-moving-average.pbtxt | 25 + .../tensorflow.train.-feature-list.pbtxt | 80 + ...n.-feature-lists.-feature-list-entry.pbtxt | 84 + .../tensorflow.train.-feature-lists.pbtxt | 84 + .../golden/tensorflow.train.-feature.pbtxt | 88 + ...rflow.train.-features.-feature-entry.pbtxt | 84 + .../golden/tensorflow.train.-features.pbtxt | 84 + .../tensorflow.train.-feed-fn-hook.pbtxt | 30 + .../tensorflow.train.-final-ops-hook.pbtxt | 34 + .../golden/tensorflow.train.-float-list.pbtxt | 80 + .../tensorflow.train.-ftrl-optimizer.pbtxt | 46 + ...rflow.train.-global-step-waiter-hook.pbtxt | 30 + ...ow.train.-gradient-descent-optimizer.pbtxt | 46 + .../golden/tensorflow.train.-int64-list.pbtxt | 80 + ...nsorflow.train.-job-def.-tasks-entry.pbtxt | 84 + .../golden/tensorflow.train.-job-def.pbtxt | 88 + ...ensorflow.train.-logging-tensor-hook.pbtxt | 30 + .../tensorflow.train.-looper-thread.pbtxt | 73 + ...tensorflow.train.-momentum-optimizer.pbtxt | 46 + .../tensorflow.train.-monitored-session.pbtxt | 26 + ...rain.-nan-loss-during-training-error.pbtxt | 16 + .../tensorflow.train.-nan-tensor-hook.pbtxt | 30 + .../golden/tensorflow.train.-optimizer.pbtxt | 45 + ...ow.train.-proximal-adagrad-optimizer.pbtxt | 46 + ...-proximal-gradient-descent-optimizer.pbtxt | 46 + .../tensorflow.train.-queue-runner.pbtxt | 49 + ...nsorflow.train.-r-m-s-prop-optimizer.pbtxt | 46 + .../golden/tensorflow.train.-saver-def.pbtxt | 120 + .../api/golden/tensorflow.train.-saver.pbtxt | 53 + .../golden/tensorflow.train.-scaffold.pbtxt | 49 + ...nsorflow.train.-second-or-step-timer.pbtxt | 21 + .../tensorflow.train.-sequence-example.pbtxt | 84 + .../golden/tensorflow.train.-server-def.pbtxt | 96 + .../api/golden/tensorflow.train.-server.pbtxt | 29 + .../tensorflow.train.-session-creator.pbtxt | 
12 + .../tensorflow.train.-session-manager.pbtxt | 21 + .../tensorflow.train.-session-run-args.pbtxt | 27 + ...ensorflow.train.-session-run-context.pbtxt | 25 + .../tensorflow.train.-session-run-hook.pbtxt | 28 + ...tensorflow.train.-session-run-values.pbtxt | 27 + ...ow.train.-singular-monitored-session.pbtxt | 30 + .../tensorflow.train.-step-counter-hook.pbtxt | 30 + .../tensorflow.train.-stop-at-step-hook.pbtxt | 30 + ...tensorflow.train.-summary-saver-hook.pbtxt | 30 + .../golden/tensorflow.train.-supervisor.pbtxt | 153 ++ ...rflow.train.-sync-replicas-optimizer.pbtxt | 58 + ...orflow.train.-worker-session-creator.pbtxt | 14 + .../tools/api/golden/tensorflow.train.pbtxt | 395 ++++ ...low.train.queue_runner.-queue-runner.pbtxt | 49 + .../tensorflow.train.queue_runner.pbtxt | 15 + ...sorflow.truncated_normal_initializer.pbtxt | 10 + ...low.uniform_unit_scaling_initializer.pbtxt | 10 + .../golden/tensorflow.zeros_initializer.pbtxt | 10 + tensorflow/tools/api/lib/BUILD | 39 + tensorflow/tools/api/lib/api_objects.proto | 31 + .../api/lib/python_object_to_proto_visitor.py | 168 ++ .../tools/api/tests/API_UPDATE_WARNING.txt | 7 + tensorflow/tools/api/tests/BUILD | 43 + tensorflow/tools/api/tests/README.txt | 13 + .../tools/api/tests/api_compatibility_test.py | 238 ++ tensorflow/tools/ci_build/pylintrc | 4 +- tensorflow/tools/common/public_api.py | 3 +- tensorflow/tools/docs/BUILD | 14 + tensorflow/tools/docs/build_docs_test.py | 52 + 305 files changed, 16815 insertions(+), 1227 deletions(-) create mode 100644 tensorflow/compiler/xla/test.h create mode 100644 tensorflow/contrib/training/python/training/python_input.py create mode 100644 tensorflow/contrib/training/python/training/python_input_test.py create mode 100644 tensorflow/python/kernel_tests/aggregate_ops_test.py create mode 100644 tensorflow/tensorboard/components/tf_dashboard_common/dashboard-behavior.ts delete mode 100644 tensorflow/tensorboard/plugins/debugger/BUILD delete mode 100644 
tensorflow/tensorboard/plugins/debugger/debugger_plugin.py delete mode 100644 tensorflow/tensorboard/plugins/debugger/debugger_plugin_test.py create mode 100644 tensorflow/tools/api/golden/BUILD create mode 100644 tensorflow/tools/api/golden/tensorflow.-aggregation-method.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-attr-value.-list-value.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-attr-value.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-auto-parallel-options.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-conditional-accumulator-base.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-conditional-accumulator.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-config-proto.-device-count-entry.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-config-proto.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-d-type.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-device-spec.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-dimension.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-event.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-f-i-f-o-queue.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-fixed-len-feature.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-fixed-len-sequence-feature.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-fixed-length-record-reader.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-g-p-u-options.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-graph-def.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-graph-keys.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-graph-options.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-graph.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-histogram-proto.pbtxt create 
mode 100644 tensorflow/tools/api/golden/tensorflow.-identity-reader.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-indexed-slices.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-interactive-session.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-log-message.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-name-attr-list.-attr-entry.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-name-attr-list.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-node-def.-attr-entry.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-node-def.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-op-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-operation.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-optimizer-options.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-padding-f-i-f-o-queue.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-priority-queue.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-queue-base.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-random-shuffle-queue.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-reader-base.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-register-gradient.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-rewriter-config.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-run-metadata.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-run-options.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-session-log.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-session.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-sparse-conditional-accumulator.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-sparse-feature.pbtxt create mode 100644 
tensorflow/tools/api/golden/tensorflow.-sparse-tensor-value.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-sparse-tensor.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-summary.-audio.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-summary.-image.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-summary.-value.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-summary.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-t-f-record-reader.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-tensor-array.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-tensor-info.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-tensor-shape.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-tensor.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-text-line-reader.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-var-len-feature.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-variable-scope.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-variable.-save-slice-info.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-variable.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.-whole-file-reader.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.app.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.compat.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.constant_initializer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-aborted-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-already-exists-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-cancelled-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-data-loss-error.pbtxt create mode 100644 
tensorflow/tools/api/golden/tensorflow.errors.-deadline-exceeded-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-failed-precondition-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-internal-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-invalid-argument-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-not-found-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-op-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-out-of-range-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-permission-denied-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-resource-exhausted-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-unauthenticated-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-unavailable-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-unimplemented-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.-unknown-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.errors.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.-estimator-spec.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.-estimator.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.-mode-keys.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.-run-config.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.-classification-output.__metaclass__.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.-classification-output.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.-export-output.__metaclass__.pbtxt create mode 100644 
tensorflow/tools/api/golden/tensorflow.estimator.export.-export-output.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.-predict-output.__metaclass__.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.-predict-output.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.-regression-output.__metaclass__.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.-regression-output.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.-serving-input-receiver.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.export.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.inputs.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.estimator.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.gfile.-fast-g-file.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.gfile.-g-file.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.gfile.-open.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.gfile.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.graph_util.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.image.-resize-method.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.image.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.layers.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.logging.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.losses.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.metrics.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.nn.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.ones_initializer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.orthogonal_initializer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.pbtxt create mode 100644 
tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-compression-type.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-options.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-writer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.python_io.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.random_normal_initializer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.random_uniform_initializer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.resource_loader.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.builder.-saved-model-builder.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.builder.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.constants.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.loader.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.main_op.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.signature_constants.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.signature_def_utils.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.tag_constants.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.saved_model.utils.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.sdca.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.sets.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.spectral.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-event.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-file-writer-cache.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-file-writer.pbtxt create mode 100644 
tensorflow/tools/api/golden/tensorflow.summary.-session-log.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-summary-description.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-summary.-audio.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-summary.-image.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-summary.-value.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-summary.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.-tagged-run-metadata.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.summary.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.sysconfig.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.test.-benchmark.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.test.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-adadelta-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-adagrad-d-a-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-adagrad-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-adam-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-bytes-list.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-checkpoint-saver-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-checkpoint-saver-listener.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-chief-session-creator.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-cluster-def.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-cluster-spec.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-coordinator.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-example.pbtxt create mode 100644 
tensorflow/tools/api/golden/tensorflow.train.-exponential-moving-average.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-feature-list.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-feature-lists.-feature-list-entry.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-feature-lists.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-feature.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-features.-feature-entry.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-features.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-feed-fn-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-final-ops-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-float-list.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-ftrl-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-global-step-waiter-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-gradient-descent-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-int64-list.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-job-def.-tasks-entry.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-job-def.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-logging-tensor-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-looper-thread.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-momentum-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-monitored-session.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-nan-loss-during-training-error.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-nan-tensor-hook.pbtxt create mode 100644 
tensorflow/tools/api/golden/tensorflow.train.-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-proximal-adagrad-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-proximal-gradient-descent-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-queue-runner.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-r-m-s-prop-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-saver-def.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-saver.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-scaffold.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-second-or-step-timer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-sequence-example.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-server-def.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-server.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-session-creator.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-session-manager.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-session-run-args.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-session-run-context.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-session-run-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-session-run-values.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-singular-monitored-session.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-step-counter-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-stop-at-step-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-summary-saver-hook.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-supervisor.pbtxt 
create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-sync-replicas-optimizer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.-worker-session-creator.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.queue_runner.-queue-runner.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.train.queue_runner.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.truncated_normal_initializer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.uniform_unit_scaling_initializer.pbtxt create mode 100644 tensorflow/tools/api/golden/tensorflow.zeros_initializer.pbtxt create mode 100644 tensorflow/tools/api/lib/BUILD create mode 100644 tensorflow/tools/api/lib/api_objects.proto create mode 100644 tensorflow/tools/api/lib/python_object_to_proto_visitor.py create mode 100644 tensorflow/tools/api/tests/API_UPDATE_WARNING.txt create mode 100644 tensorflow/tools/api/tests/BUILD create mode 100644 tensorflow/tools/api/tests/README.txt create mode 100644 tensorflow/tools/api/tests/api_compatibility_test.py create mode 100644 tensorflow/tools/docs/build_docs_test.py diff --git a/tensorflow/BUILD b/tensorflow/BUILD index b98be57ec08..1ab6ee2dc0e 100644 --- a/tensorflow/BUILD +++ b/tensorflow/BUILD @@ -308,10 +308,12 @@ filegroup( "//tensorflow/tensorboard/components/vz_sorting/test:all_files", "//tensorflow/tensorboard/lib:all_files", "//tensorflow/tensorboard/plugins:all_files", - "//tensorflow/tensorboard/plugins/debugger:all_files", "//tensorflow/tensorboard/plugins/projector:all_files", "//tensorflow/tensorboard/plugins/text:all_files", "//tensorflow/tensorboard/scripts:all_files", + "//tensorflow/tools/api/golden:all_files", + "//tensorflow/tools/api/lib:all_files", + "//tensorflow/tools/api/tests:all_files", "//tensorflow/tools/common:all_files", "//tensorflow/tools/compatibility:all_files", 
"//tensorflow/tools/dist_test/server:all_files", @@ -346,6 +348,11 @@ filegroup( ), ) +filegroup( + name = "docs_src", + data = glob(["docs_src/**/*.md"]), +) + # ------------------------------------------- # New rules should be added above this target. # ------------------------------------------- diff --git a/tensorflow/compiler/xla/BUILD b/tensorflow/compiler/xla/BUILD index 35c0efb8f0f..26ad8ac5f8f 100644 --- a/tensorflow/compiler/xla/BUILD +++ b/tensorflow/compiler/xla/BUILD @@ -44,6 +44,17 @@ xla_proto_library( ], ) +cc_library( + name = "test", + testonly = 1, + hdrs = ["test.h"], + visibility = [":friends"], + deps = [ + "//tensorflow/core:lib_internal", + "//tensorflow/core:test", + ], +) + cc_library( name = "types", hdrs = ["types.h"], @@ -256,10 +267,9 @@ cc_test( ":array4d", ":literal_util", ":shape_util", - ":test_helpers", + ":test", ":types", "//tensorflow/core:lib", - "//tensorflow/core:test", "//tensorflow/core:test_main", ], ) diff --git a/tensorflow/compiler/xla/literal_util_test.cc b/tensorflow/compiler/xla/literal_util_test.cc index e53763376bf..91971c3e24c 100644 --- a/tensorflow/compiler/xla/literal_util_test.cc +++ b/tensorflow/compiler/xla/literal_util_test.cc @@ -21,14 +21,16 @@ limitations under the License. 
#include "tensorflow/compiler/xla/array4d.h" #include "tensorflow/compiler/xla/layout_util.h" #include "tensorflow/compiler/xla/shape_util.h" -#include "tensorflow/compiler/xla/test_helpers.h" +#include "tensorflow/compiler/xla/test.h" #include "tensorflow/compiler/xla/types.h" -#include "tensorflow/core/platform/test.h" #include "tensorflow/core/platform/types.h" namespace xla { namespace { +using ::testing::ElementsAre; +using ::testing::ElementsAreArray; + class LiteralUtilTest : public ::testing::Test { protected: LiteralUtilTest() { @@ -159,9 +161,7 @@ TEST_F(LiteralUtilTest, CreateR3FromArray3d) { // clang-format on auto literal = LiteralUtil::CreateR3FromArray3D(array_3d); - EXPECT_MATCH(testing::PBToVec( - literal->shape().dimensions()), - testing::VectorMatcher({2, 3, 2})); + EXPECT_THAT(literal->shape().dimensions(), ElementsAre(2, 3, 2)); string result = LiteralUtil::ToString(*literal); const string expected = R"(f32[2,3,2] { { { 1, 2 }, @@ -182,9 +182,7 @@ TEST_F(LiteralUtilTest, LiteralR4F32ProjectedStringifies) { {2001, 2002}, }, /*projection_p=*/1, /*projection_z=*/2); // clang-format on - EXPECT_MATCH( - testing::PBToVec(literal->shape().dimensions()), - testing::VectorMatcher({1, 2, 3, 2})); + EXPECT_THAT(literal->shape().dimensions(), ElementsAre(1, 2, 3, 2)); string result = LiteralUtil::ToString(*literal); const string expected = R"(f32[1,2,3,2] { { // i0=0 @@ -204,10 +202,8 @@ TEST_F(LiteralUtilTest, LiteralR4F32ProjectedStringifies) { } TEST_F(LiteralUtilTest, LiteralR4F32Stringifies) { - EXPECT_MATCH( - testing::PBToVec( - literal_r4_2x2x3x3_dim0major_->shape().dimensions()), - testing::VectorMatcher({2, 2, 3, 3})); + EXPECT_THAT(literal_r4_2x2x3x3_dim0major_->shape().dimensions(), + ElementsAre(2, 2, 3, 3)); string result = LiteralUtil::ToString(*literal_r4_2x2x3x3_dim0major_); const string expected = R"(f32[2,2,3,3] { { // i0=0 @@ -516,27 +512,23 @@ TEST_F(LiteralUtilTest, TestR2LinearLayout) { auto mat_dim0minor = 
LiteralUtil::CreateR2WithLayout( {{1, 2, 3}, {4, 5, 6}}, layout_r2_dim0minor_); EXPECT_EQ(mat_dim0minor->s32s_size(), 6); - EXPECT_MATCH(testing::PBToVec(mat_dim0minor->s32s()), - testing::VectorMatcher({1, 4, 2, 5, 3, 6})); + EXPECT_THAT(mat_dim0minor->s32s(), ElementsAre(1, 4, 2, 5, 3, 6)); // Test expected memory layout when using Relayout to row major. auto relaid_mat_to_dim0major = LiteralUtil::Relayout(*mat_dim0minor, layout_r2_dim0major_); - EXPECT_MATCH(testing::PBToVec(relaid_mat_to_dim0major->s32s()), - testing::VectorMatcher({1, 2, 3, 4, 5, 6})); + EXPECT_THAT(relaid_mat_to_dim0major->s32s(), ElementsAre(1, 2, 3, 4, 5, 6)); // Test expected memory layout of R2 created with dim0-major (row-major). auto mat_dim0major = LiteralUtil::CreateR2WithLayout( {{1, 2, 3}, {4, 5, 6}}, layout_r2_dim0major_); EXPECT_EQ(mat_dim0major->s32s_size(), 6); - EXPECT_MATCH(testing::PBToVec(mat_dim0major->s32s()), - testing::VectorMatcher({1, 2, 3, 4, 5, 6})); + EXPECT_THAT(mat_dim0major->s32s(), ElementsAre(1, 2, 3, 4, 5, 6)); // Test expected memory layout when using Relayout to column major. auto relaid_mat_to_dim0minor = LiteralUtil::Relayout(*mat_dim0major, layout_r2_dim0minor_); - EXPECT_MATCH(testing::PBToVec(relaid_mat_to_dim0minor->s32s()), - testing::VectorMatcher({1, 4, 2, 5, 3, 6})); + EXPECT_THAT(relaid_mat_to_dim0minor->s32s(), ElementsAre(1, 4, 2, 5, 3, 6)); } TEST_F(LiteralUtilTest, TestR3LinearLayout) { @@ -558,28 +550,28 @@ TEST_F(LiteralUtilTest, TestR3LinearLayout) { EXPECT_EQ(lit_dim0minor->s32s_size(), 12); std::vector expected_dim0minor{1, 7, 4, 10, 2, 8, 5, 11, 3, 9, 6, 12}; - EXPECT_MATCH(testing::PBToVec(lit_dim0minor->s32s()), - testing::VectorMatcher(expected_dim0minor)); + EXPECT_THAT(lit_dim0minor->s32s(), + testing::ElementsAreArray(expected_dim0minor)); // Test expected memory layout when using Relayout to row major. 
auto relaid_lit_to_dim0major = LiteralUtil::Relayout(*lit_dim0minor, layout_r3_dim0major_); std::vector expected_dim0major{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}; - EXPECT_MATCH(testing::PBToVec(relaid_lit_to_dim0major->s32s()), - testing::VectorMatcher(expected_dim0major)); + EXPECT_THAT(relaid_lit_to_dim0major->s32s(), + testing::ElementsAreArray(expected_dim0major)); // Test expected memory layout of R3 created with dim0-major (row-major). auto lit_dim0major = LiteralUtil::CreateR3FromArray3DWithLayout( arr3d, layout_r3_dim0major_); EXPECT_EQ(lit_dim0major->s32s_size(), 12); - EXPECT_MATCH(testing::PBToVec(lit_dim0major->s32s()), - testing::VectorMatcher(expected_dim0major)); + EXPECT_THAT(lit_dim0major->s32s(), + testing::ElementsAreArray(expected_dim0major)); // Test expected memory layout when using Relayout to column major. auto relaid_lit_to_dim0minor = LiteralUtil::Relayout(*lit_dim0major, layout_r3_dim0minor_); - EXPECT_MATCH(testing::PBToVec(relaid_lit_to_dim0minor->s32s()), - testing::VectorMatcher(expected_dim0minor)); + EXPECT_THAT(relaid_lit_to_dim0minor->s32s(), + testing::ElementsAreArray(expected_dim0minor)); } TEST_F(LiteralUtilTest, SliceR0S32) { diff --git a/tensorflow/compiler/xla/service/BUILD b/tensorflow/compiler/xla/service/BUILD index 695e4e7f079..fc69d8eb3a6 100644 --- a/tensorflow/compiler/xla/service/BUILD +++ b/tensorflow/compiler/xla/service/BUILD @@ -1431,7 +1431,9 @@ cc_library( deps = [ ":hlo", ":hlo_pass", + "//tensorflow/compiler/xla:shape_util", "//tensorflow/compiler/xla:status_macros", + "//tensorflow/compiler/xla:util", "//tensorflow/compiler/xla/service/gpu:ir_emission_utils", "//tensorflow/core:lib", ], @@ -1442,11 +1444,13 @@ cc_test( srcs = ["transpose_folding_test.cc"], deps = [ ":hlo", + ":shape_inference", ":transpose_folding", "//tensorflow/compiler/xla:literal_util", "//tensorflow/compiler/xla:shape_util", "//tensorflow/compiler/xla:test_helpers", "//tensorflow/compiler/xla:xla_data_proto", + 
"//tensorflow/compiler/xla/client:computation_builder", "//tensorflow/compiler/xla/service/gpu:ir_emission_utils", "//tensorflow/core:lib", "//tensorflow/core:test_main", diff --git a/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc b/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc index c5433d4b89d..4fbdb15e70e 100644 --- a/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc +++ b/tensorflow/compiler/xla/service/cpu/cpu_compiler.cc @@ -232,7 +232,14 @@ Status CpuCompiler::RunHloPasses(HloModule* hlo_module, pass.AddPass(); pass.AddPass(); } - pipeline.AddPass<TransposeFolding>(PotentiallyImplementedAsEigenDot); + pipeline.AddPass<TransposeFolding>( + [](const HloInstruction& dot, + const TransposeFolding::OperandIndices& candidate_operands) { + return PotentiallyImplementedAsEigenDot(dot) + ? candidate_operands + : TransposeFolding::OperandIndices{}; + }, + TransposeFolding::NeverFoldTranspose); pipeline.AddPass(); pipeline.AddPass(/*is_layout_sensitive=*/false); pipeline.AddPass(); diff --git a/tensorflow/compiler/xla/service/gpu/gpu_compiler.cc b/tensorflow/compiler/xla/service/gpu/gpu_compiler.cc index f692f28bd98..1d2111c427c 100644 --- a/tensorflow/compiler/xla/service/gpu/gpu_compiler.cc +++ b/tensorflow/compiler/xla/service/gpu/gpu_compiler.cc @@ -133,7 +133,13 @@ tensorflow::Status OptimizeHloModule(HloModule* hlo_module, pass.AddPass(); } pipeline.AddPass(); - pipeline.AddPass<TransposeFolding>(ImplementedAsGemm); + pipeline.AddPass<TransposeFolding>( + [](const HloInstruction& dot, + const TransposeFolding::OperandIndices& candidate_operands) { + return ImplementedAsGemm(dot) ? 
candidate_operands + : TransposeFolding::OperandIndices{}; + }, + TransposeFolding::NeverFoldTranspose); pipeline.AddPass(); pipeline.AddPass(/*is_layout_sensitive=*/false); pipeline.AddPass(); diff --git a/tensorflow/compiler/xla/service/gpu/ir_emission_utils.h b/tensorflow/compiler/xla/service/gpu/ir_emission_utils.h index 4d3e9b10b2e..e8c68a6ef72 100644 --- a/tensorflow/compiler/xla/service/gpu/ir_emission_utils.h +++ b/tensorflow/compiler/xla/service/gpu/ir_emission_utils.h @@ -25,16 +25,7 @@ limitations under the License. namespace xla { namespace gpu { -const int64 kWarpSize = 32; - -// Precondition: "hlo" is an operand of a Dot instruction. -// -// Returns whether "hlo" is foldable to its user. -bool IsOperandFoldableToDot(const HloInstruction& hlo); - -// Returns true if GpuCompiler can fold any operands of "dot" into "dot" for -// better performance. -bool CanFoldOperandsIntoDot(const HloInstruction& dot); +constexpr int64 kWarpSize = 32; // Returns true if `hlo` will be implemented as a call to BLAS gemm. bool ImplementedAsGemm(const HloInstruction& hlo); diff --git a/tensorflow/compiler/xla/service/transpose_folding.cc b/tensorflow/compiler/xla/service/transpose_folding.cc index 07e0ce89f6a..cfb90e6e1d4 100644 --- a/tensorflow/compiler/xla/service/transpose_folding.cc +++ b/tensorflow/compiler/xla/service/transpose_folding.cc @@ -21,7 +21,9 @@ limitations under the License. 
#include "tensorflow/compiler/xla/service/gpu/ir_emission_utils.h" #include "tensorflow/compiler/xla/service/hlo_computation.h" #include "tensorflow/compiler/xla/service/hlo_instruction.h" +#include "tensorflow/compiler/xla/shape_util.h" #include "tensorflow/compiler/xla/status_macros.h" +#include "tensorflow/compiler/xla/util.h" #include "tensorflow/core/lib/core/errors.h" #include "tensorflow/core/lib/core/status.h" #include "tensorflow/core/platform/logging.h" @@ -30,43 +32,56 @@ namespace xla { namespace { -bool IsOperandFoldableToDot(const HloInstruction& hlo) { - return hlo.IsRank2Transpose() && - hlo.user_count() == 1; // The dot is its only user. -} - -bool CanFoldOperandsIntoDot( +TransposeFolding::OperandIndices CanFoldOperandsIntoDot( const HloInstruction& dot, - const TransposeFolding::IsTransposableGemmFn& is_transposable_gemm) { + const TransposeFolding::TransposableGemmOperandsFn& + transposable_gemm_operands) { if (HloOpcode::kDot != dot.opcode()) { - return false; + return {}; } - if (!is_transposable_gemm(dot)) { - return false; + TransposeFolding::OperandIndices operand_set; + for (int64 i = 0; i < dot.operand_count(); ++i) { + auto& operand = *dot.operand(i); + if (operand.IsRank2Transpose() && operand.user_count() == 1) { + operand_set.push_back(i); + } } - const HloInstruction* lhs = dot.operand(0); - const HloInstruction* rhs = dot.operand(1); - bool lhs_foldable = IsOperandFoldableToDot(*lhs); - bool rhs_foldable = IsOperandFoldableToDot(*rhs); - if (!lhs_foldable && !rhs_foldable) { - return false; - } - return true; + return transposable_gemm_operands(dot, operand_set); } +TransposeFolding::OperandIndices CanFoldOperandsIntoConvolution( + const HloInstruction& convolution, + const TransposeFolding::TransposableConvOperandsFn& + transposable_conv_operands) { + if (HloOpcode::kConvolution != convolution.opcode()) { + return {}; + } + + // We only support folding the RHS. 
+ const int64 kRhsOperandIndex = 1; + auto& operand = *convolution.operand(kRhsOperandIndex); + if (operand.opcode() == HloOpcode::kTranspose && operand.user_count() == 1) { + return transposable_conv_operands(convolution, {kRhsOperandIndex}); + } + + return {}; +} + +using InstructionOperandsPair = + std::pair<HloInstruction*, TransposeFolding::OperandIndices>; + // Folds the operands of `dot` that are foldable transposes. `computation` is -// the parent HLO computation of `dot`. `module` is the parent HloModule of -// `computation`. +// the parent HLO computation of `dot`. // // Returns whether the module is changed. -bool FoldTransposeIntoDot(HloInstruction* dot, HloComputation* computation) { +bool FoldTransposeIntoDot(InstructionOperandsPair pair, + HloComputation* computation) { + auto* dot = pair.first; std::vector<HloInstruction*> instructions_to_fuse(1, dot); - for (HloInstruction* operand : dot->operands()) { - if (IsOperandFoldableToDot(*operand)) { - instructions_to_fuse.push_back(operand); - } + for (const int64 operand_index : pair.second) { + instructions_to_fuse.push_back(dot->mutable_operand(operand_index)); } // Early-exit if no operands are foldable. @@ -79,28 +94,95 @@ bool FoldTransposeIntoDot(HloInstruction* dot, HloComputation* computation) { return true; } +// Folds the operands of `convolution` that are foldable transposes. +// `computation` is the parent HLO computation of `convolution`. +// +// Returns whether the module is changed. +bool FoldTransposeIntoConvolution(InstructionOperandsPair pair, + HloComputation* computation) { + auto& convolution = *pair.first; + + // We only support fusing the RHS transpose into convolution. + // + // ConvolutionDimensionNumbers doesn't make enough of a distinction between + // the output and the activations. + // + // TODO(b/37125184): Support transposing the LHS too. 
+ if (pair.second.size() != 1 || pair.second.front() != 1) {
+ return false;
+ }
+
+ const ConvolutionDimensionNumbers& dnums =
+ convolution.convolution_dimension_numbers();
+ HloInstruction& transpose = *convolution.mutable_operand(1);
+ CHECK_EQ(transpose.opcode(), HloOpcode::kTranspose);
+ const auto& transpose_dimensions = transpose.dimensions();
+ HloInstruction& transpose_operand = *transpose.mutable_operand(0);
+
+ // Everything remains the same except for the kernel dimension numbers. We
+ // need to apply the transpose permutation to the original shape to figure out
+ // what the new logical dimensions are.
+ ConvolutionDimensionNumbers new_dnums = dnums;
+ new_dnums.set_kernel_input_feature_dimension(
+ transpose_dimensions[dnums.kernel_input_feature_dimension()]);
+ new_dnums.set_kernel_output_feature_dimension(
+ transpose_dimensions[dnums.kernel_output_feature_dimension()]);
+ for (auto& kernel_spatial_dimension :
+ *new_dnums.mutable_kernel_spatial_dimensions()) {
+ kernel_spatial_dimension = transpose_dimensions[kernel_spatial_dimension];
+ }
+
+ auto new_conv = HloInstruction::CreateConvolve(
+ convolution.shape(), convolution.mutable_operand(0), &transpose_operand,
+ convolution.window(), new_dnums);
+ TF_CHECK_OK(computation->ReplaceWithNewInstruction(&convolution,
+ std::move(new_conv)));
+
+ return true;
+}
+
 } // namespace
-TransposeFolding::TransposeFolding(IsTransposableGemmFn is_transposable_gemm)
- : is_transposable_gemm_(std::move(is_transposable_gemm)) {}
+TransposeFolding::TransposeFolding(
+ TransposableGemmOperandsFn transposable_gemm_operands,
+ TransposableConvOperandsFn transposable_conv_operands)
+ : transposable_gemm_operands_(std::move(transposable_gemm_operands)),
+ transposable_conv_operands_(std::move(transposable_conv_operands)) {}
StatusOr<bool> TransposeFolding::Run(HloModule* module) {
 // Modifying the graph while traversing is dangerous, so we find all folding
 // opportunities before actually folding them.
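The kernel remapping above indexes the transpose's permutation with each old dimension number: output dimension `i` of a transpose reads from operand dimension `dimensions()[i]`, so `new = transpose_dimensions[old]` for each kernel dimension number. A minimal Python sketch of that remapping (hypothetical helper, not the real `ConvolutionDimensionNumbers` API):

```python
def remap_kernel_dnums(transpose_dims, kernel_input, kernel_output, spatial):
    """Apply a transpose permutation to convolution kernel dimension numbers.

    transpose_dims mirrors HloInstruction::dimensions(): each old dimension
    number is pushed through the permutation so the convolution can read the
    un-transposed kernel operand directly.
    """
    return (transpose_dims[kernel_input],
            transpose_dims[kernel_output],
            [transpose_dims[d] for d in spatial])

# The FoldConvComplexTransposeRhs test below uses permutation {1, 3, 0, 2};
# with illustrative dimension numbers input=0, output=1, spatial=[2, 3]:
print(remap_kernel_dnums([1, 3, 0, 2], 0, 1, [2, 3]))  # (1, 3, [0, 2])
```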
 HloComputation* entry_computation = module->entry_computation();
- std::vector<HloInstruction*> foldable_dots;
- auto visit_fn = [this, &foldable_dots](HloInstruction* instruction) {
- if (CanFoldOperandsIntoDot(*instruction, is_transposable_gemm_)) {
- foldable_dots.emplace_back(instruction);
+ std::vector<std::pair<HloInstruction*, OperandIndices>> foldable_dots;
+ std::vector<std::pair<HloInstruction*, OperandIndices>> foldable_convolutions;
+ auto visit_fn = [this, &foldable_dots,
+ &foldable_convolutions](HloInstruction* instruction) {
+ {
+ OperandIndices operand_indices =
+ CanFoldOperandsIntoDot(*instruction, transposable_gemm_operands_);
+ if (!operand_indices.empty()) {
+ foldable_dots.emplace_back(instruction, operand_indices);
+ }
+ }
+ {
+ OperandIndices operand_indices = CanFoldOperandsIntoConvolution(
+ *instruction, transposable_conv_operands_);
+ if (!operand_indices.empty()) {
+ foldable_convolutions.emplace_back(
+ std::make_pair(instruction, operand_indices));
+ }
 }
 return tensorflow::Status::OK();
 };
 TF_RETURN_IF_ERROR(entry_computation->root_instruction()->Accept(visit_fn));
 bool changed = false;
- for (HloInstruction* dot : foldable_dots) {
- changed |= FoldTransposeIntoDot(dot, entry_computation);
+ for (InstructionOperandsPair& pair : foldable_dots) {
+ changed |= FoldTransposeIntoDot(pair, entry_computation);
+ }
+ for (InstructionOperandsPair& pair : foldable_convolutions) {
+ changed |= FoldTransposeIntoConvolution(pair, entry_computation);
 }
 return changed;
}
diff --git a/tensorflow/compiler/xla/service/transpose_folding.h b/tensorflow/compiler/xla/service/transpose_folding.h
index d857c04ed8d..71e8446452f 100644
--- a/tensorflow/compiler/xla/service/transpose_folding.h
+++ b/tensorflow/compiler/xla/service/transpose_folding.h
@@ -25,16 +25,37 @@ namespace xla {
// operator is implemented by a GEMM kernel that can transpose its inputs.
class TransposeFolding : public HloPassInterface {
 public:
- // IsTransposableGemmFn should return true iff the instruction argument is
- // implemented as a GEMM kernel that supports transposing its arguments.
- typedef std::function<bool(const HloInstruction&)> IsTransposableGemmFn;
- explicit TransposeFolding(IsTransposableGemmFn is_transposable_gemm);
+ using OperandIndices = std::vector<int64>;
+
+ // Returns the set of foldable operands for a given HLO and some candidate
+ // operands.
+ using FoldableOperands = std::function<OperandIndices(const HloInstruction&,
+ const OperandIndices&)>;
+ using TransposableGemmOperandsFn = FoldableOperands;
+ using TransposableConvOperandsFn = FoldableOperands;
+
+ // Helper function to explicitly not fold transposes.
+ static OperandIndices NeverFoldTranspose(const HloInstruction&,
+ const OperandIndices&) {
+ return {};
+ }
+ // transposable_gemm_operands returns the set of operands it wants to fold if
+ // the instruction argument is implemented as a GEMM kernel that supports
+ // transposing its arguments.
+ //
+ // transposable_conv_operands returns the set of operands it wants to fold if
+ // the instruction argument is implemented as a convolution that supports
+ // transposing its arguments.
+ explicit TransposeFolding(
+ TransposableGemmOperandsFn transposable_gemm_operands,
+ TransposableConvOperandsFn transposable_conv_operands);
 tensorflow::StringPiece name() const override { return "transpose-folding"; }
 StatusOr<bool> Run(HloModule* module) override;
 private:
- IsTransposableGemmFn is_transposable_gemm_;
+ TransposableGemmOperandsFn transposable_gemm_operands_;
+ TransposableConvOperandsFn transposable_conv_operands_;
};
} // namespace xla
diff --git a/tensorflow/compiler/xla/service/transpose_folding_test.cc b/tensorflow/compiler/xla/service/transpose_folding_test.cc
index 09f932e29e6..ea5ab2b9171 100644
--- a/tensorflow/compiler/xla/service/transpose_folding_test.cc
+++ b/tensorflow/compiler/xla/service/transpose_folding_test.cc
@@ -16,15 +16,17 @@ limitations under the License.
#include "tensorflow/compiler/xla/service/transpose_folding.h"
#include <memory>
-#include <set>
+#include <unordered_set>
#include <vector>
+#include "tensorflow/compiler/xla/client/computation_builder.h"
#include "tensorflow/compiler/xla/literal_util.h"
#include "tensorflow/compiler/xla/service/gpu/ir_emission_utils.h"
#include "tensorflow/compiler/xla/service/hlo_computation.h"
#include "tensorflow/compiler/xla/service/hlo_instruction.h"
#include "tensorflow/compiler/xla/service/hlo_module.h"
#include "tensorflow/compiler/xla/service/hlo_opcode.h"
+#include "tensorflow/compiler/xla/service/shape_inference.h"
#include "tensorflow/compiler/xla/shape_util.h"
#include "tensorflow/compiler/xla/test_helpers.h"
#include "tensorflow/compiler/xla/xla_data.pb.h"
@@ -35,12 +37,22 @@ namespace xla {
class TransposeFoldingTest : public ::testing::Test {
 protected:
 void FoldTranspose(HloModule* module) {
- TransposeFolding transpose_folding(gpu::ImplementedAsGemm);
+ TransposeFolding transpose_folding(
+ [](const HloInstruction& dot,
+ const TransposeFolding::OperandIndices& candidate_operands) {
+ return gpu::ImplementedAsGemm(dot)
+ ? candidate_operands
+ : TransposeFolding::OperandIndices{};
+ },
+ [](const HloInstruction& convolution,
+ const TransposeFolding::OperandIndices& candidate_operands) {
+ return candidate_operands;
+ });
 EXPECT_IS_OK(transpose_folding.Run(module).status());
 }
};
-TEST_F(TransposeFoldingTest, FoldTranspose) {
+TEST_F(TransposeFoldingTest, FoldDotTranspose) {
 auto builder = HloComputation::Builder("entry_computation");
 HloInstruction* x = builder.AddInstruction(HloInstruction::CreateParameter(
 /*parameter_number=*/0, ShapeUtil::MakeShape(F32, {2, 3}),
@@ -61,7 +73,7 @@ TEST_F(TransposeFoldingTest, FoldTranspose) {
 FoldTranspose(&module);
 // Instructions after folding: x, y, and the fusion.
- std::set<HloInstruction*> instruction_set;
+ std::unordered_set<HloInstruction*> instruction_set;
 for (auto& instruction : entry_computation->instructions()) {
 instruction_set.insert(instruction.get());
 }
@@ -77,7 +89,7 @@ TEST_F(TransposeFoldingTest, FoldTranspose) {
 EXPECT_EQ(4, fusion->fused_instructions().size());
}
-TEST_F(TransposeFoldingTest, FoldTransposeConstant) {
+TEST_F(TransposeFoldingTest, FoldDotTransposeConstant) {
 auto builder = HloComputation::Builder("entry_computation");
 // 2x1
 HloInstruction* const0 = builder.AddInstruction(
@@ -115,7 +127,7 @@ TEST_F(TransposeFoldingTest, FoldTransposeConstant) {
 entry_computation->root_instruction()->fused_instructions().size());
}
-TEST_F(TransposeFoldingTest, FuseWithConstantOperands) {
+TEST_F(TransposeFoldingTest, FuseDotWithConstantOperands) {
 auto builder = HloComputation::Builder("entry");
 // (1.0 + 2.0) * (2.0 - 3.0)
 HloInstruction* const1 = builder.AddInstruction(
@@ -146,4 +158,168 @@ TEST_F(TransposeFoldingTest, FuseWithConstantOperands) {
 EXPECT_EQ(6, callee_computation->instructions().size());
}
+// Test that a two dimension swap of the kernel gets folded into convolution.
+TEST_F(TransposeFoldingTest, FoldConvDimSwapTransposeRhs) {
+ auto builder = HloComputation::Builder("entry_computation");
+ HloInstruction* x = builder.AddInstruction(HloInstruction::CreateParameter(
+ /*parameter_number=*/0, ShapeUtil::MakeShape(F32, {2, 3, 1, 1}),
+ /*name=*/"x"));
+ HloInstruction* y = builder.AddInstruction(HloInstruction::CreateParameter(
+ /*parameter_number=*/1, ShapeUtil::MakeShape(F32, {3, 2, 1, 1}),
+ /*name=*/"y"));
+ HloInstruction* transpose_y =
+ builder.AddInstruction(HloInstruction::CreateTranspose(
+ ShapeUtil::MakeShape(F32, {2, 3, 1, 1}), y, {1, 0, 2, 3}));
+ auto dnums = ComputationBuilder::CreateDefaultConvDimensionNumbers();
+ Window window;
+ for (int i = 0; i < 2; ++i) {
+ WindowDimension* dim = window.add_dimensions();
+ dim->set_padding_low(0);
+ dim->set_padding_high(0);
+ dim->set_base_dilation(1);
+ dim->set_window_dilation(1);
+ dim->set_stride(1);
+ dim->set_size(
+ transpose_y->shape().dimensions(dnums.kernel_spatial_dimensions(i)));
+ }
+ StatusOr<Shape> conv_shape = ShapeInference::InferConvolveShape(
+ x->shape(), transpose_y->shape(), window, dnums);
+ EXPECT_IS_OK(conv_shape);
+ HloInstruction* conv = builder.AddInstruction(HloInstruction::CreateConvolve(
+ conv_shape.ValueOrDie(), x, transpose_y, window, dnums));
+
+ HloModule module("test_module");
+ HloComputation* entry_computation =
+ module.AddEntryComputation(builder.Build(conv));
+ FoldTranspose(&module);
+
+ // Instructions after folding: x, y, and the convolution.
+ std::unordered_set<HloInstruction*> instruction_set;
+ for (auto& instruction : entry_computation->instructions()) {
+ instruction_set.insert(instruction.get());
+ }
+ CHECK_EQ(1, instruction_set.erase(x)) << "x is not in entry_computation.";
+ CHECK_EQ(1, instruction_set.erase(y)) << "y is not in entry_computation.";
+ CHECK_EQ(1, instruction_set.size())
+ << "entry_computation should contain exactly 3 instructions.";
+ HloInstruction* new_conv = *instruction_set.begin();
+ EXPECT_EQ(HloOpcode::kConvolution, new_conv->opcode());
+ EXPECT_EQ(dnums.kernel_input_feature_dimension(),
+ new_conv->convolution_dimension_numbers()
+ .kernel_output_feature_dimension());
+ EXPECT_EQ(dnums.kernel_output_feature_dimension(),
+ new_conv->convolution_dimension_numbers()
+ .kernel_input_feature_dimension());
+}
+
+// Test that a complex transpose of the kernel gets folded into convolution.
+TEST_F(TransposeFoldingTest, FoldConvComplexTransposeRhs) {
+ auto builder = HloComputation::Builder("entry_computation");
+ HloInstruction* x = builder.AddInstruction(HloInstruction::CreateParameter(
+ /*parameter_number=*/0, ShapeUtil::MakeShape(F32, {2, 3, 1, 1}),
+ /*name=*/"x"));
+ HloInstruction* y = builder.AddInstruction(HloInstruction::CreateParameter(
+ /*parameter_number=*/1, ShapeUtil::MakeShape(F32, {1, 2, 1, 3}),
+ /*name=*/"y"));
+ HloInstruction* transpose_y =
+ builder.AddInstruction(HloInstruction::CreateTranspose(
+ ShapeUtil::MakeShape(F32, {2, 3, 1, 1}), y, {1, 3, 0, 2}));
+ auto dnums = ComputationBuilder::CreateDefaultConvDimensionNumbers();
+ Window window;
+ for (int i = 0; i < 2; ++i) {
+ WindowDimension* dim = window.add_dimensions();
+ dim->set_padding_low(0);
+ dim->set_padding_high(0);
+ dim->set_base_dilation(1);
+ dim->set_window_dilation(1);
+ dim->set_stride(1);
+ dim->set_size(
+ transpose_y->shape().dimensions(dnums.kernel_spatial_dimensions(i)));
+ }
+ StatusOr<Shape> conv_shape = ShapeInference::InferConvolveShape(
+ x->shape(), transpose_y->shape(), window, dnums);
+
EXPECT_IS_OK(conv_shape);
+ HloInstruction* conv = builder.AddInstruction(HloInstruction::CreateConvolve(
+ conv_shape.ValueOrDie(), x, transpose_y, window, dnums));
+
+ HloModule module("test_module");
+ HloComputation* entry_computation =
+ module.AddEntryComputation(builder.Build(conv));
+ FoldTranspose(&module);
+
+ // Instructions after folding: x, y, and the convolution.
+ std::unordered_set<HloInstruction*> instruction_set;
+ for (auto& instruction : entry_computation->instructions()) {
+ instruction_set.insert(instruction.get());
+ }
+ CHECK_EQ(1, instruction_set.erase(x)) << "x is not in entry_computation.";
+ CHECK_EQ(1, instruction_set.erase(y)) << "y is not in entry_computation.";
+ CHECK_EQ(1, instruction_set.size())
+ << "entry_computation should contain exactly 3 instructions.";
+ HloInstruction* new_conv = *instruction_set.begin();
+ EXPECT_EQ(HloOpcode::kConvolution, new_conv->opcode());
+ EXPECT_EQ(dnums.kernel_input_feature_dimension(),
+ new_conv->convolution_dimension_numbers()
+ .kernel_output_feature_dimension());
+ EXPECT_EQ(dnums.kernel_spatial_dimensions(1),
+ new_conv->convolution_dimension_numbers()
+ .kernel_input_feature_dimension());
+ EXPECT_EQ(
+ dnums.kernel_output_feature_dimension(),
+ new_conv->convolution_dimension_numbers().kernel_spatial_dimensions(0));
+ EXPECT_EQ(
+ dnums.kernel_spatial_dimensions(0),
+ new_conv->convolution_dimension_numbers().kernel_spatial_dimensions(1));
+}
+
+// Test that a transpose of the activations does not get folded into
+// convolution.
+TEST_F(TransposeFoldingTest, FoldConvTransposeLhs) {
+ auto builder = HloComputation::Builder("entry_computation");
+ HloInstruction* x = builder.AddInstruction(HloInstruction::CreateParameter(
+ /*parameter_number=*/0, ShapeUtil::MakeShape(F32, {3, 2, 1, 1}),
+ /*name=*/"x"));
+ HloInstruction* y = builder.AddInstruction(HloInstruction::CreateParameter(
+ /*parameter_number=*/1, ShapeUtil::MakeShape(F32, {2, 3, 1, 1}),
+ /*name=*/"y"));
+ HloInstruction* transpose_x =
+ builder.AddInstruction(HloInstruction::CreateTranspose(
+ ShapeUtil::MakeShape(F32, {2, 3, 1, 1}), x, {1, 0, 2, 3}));
+ auto dnums = ComputationBuilder::CreateDefaultConvDimensionNumbers();
+ Window window;
+ for (int i = 0; i < 2; ++i) {
+ WindowDimension* dim = window.add_dimensions();
+ dim->set_padding_low(0);
+ dim->set_padding_high(0);
+ dim->set_base_dilation(1);
+ dim->set_window_dilation(1);
+ dim->set_stride(1);
+ dim->set_size(y->shape().dimensions(dnums.kernel_spatial_dimensions(i)));
+ }
+ StatusOr<Shape> conv_shape = ShapeInference::InferConvolveShape(
+ transpose_x->shape(), y->shape(), window, dnums);
+ EXPECT_IS_OK(conv_shape);
+ HloInstruction* conv = builder.AddInstruction(HloInstruction::CreateConvolve(
+ conv_shape.ValueOrDie(), transpose_x, y, window, dnums));
+
+ HloModule module("test_module");
+ HloComputation* entry_computation =
+ module.AddEntryComputation(builder.Build(conv));
+ FoldTranspose(&module);
+
+ // Instructions after folding: transpose_x, y, and the convolution.
+ std::unordered_set<HloInstruction*> instruction_set;
+ for (auto& instruction : entry_computation->instructions()) {
+ instruction_set.insert(instruction.get());
+ }
+ CHECK_EQ(1, instruction_set.erase(x)) << "x is not in entry_computation.";
+ CHECK_EQ(1, instruction_set.erase(y)) << "y is not in entry_computation.";
+ CHECK_EQ(1, instruction_set.erase(transpose_x))
+ << "transpose_x is not in entry_computation.";
+ CHECK_EQ(1, instruction_set.erase(conv))
+ << "conv is not in entry_computation.";
+ CHECK_EQ(0, instruction_set.size())
+ << "entry_computation should contain exactly 4 instructions.";
+}
+
} // namespace xla
diff --git a/tensorflow/compiler/xla/test.h b/tensorflow/compiler/xla/test.h
new file mode 100644
index 00000000000..87a8c5f3a52
--- /dev/null
+++ b/tensorflow/compiler/xla/test.h
@@ -0,0 +1,48 @@
+/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+==============================================================================*/
+
+#ifndef TENSORFLOW_COMPILER_XLA_TEST_H_
+#define TENSORFLOW_COMPILER_XLA_TEST_H_
+
+// This header includes gmock.h and enables the use of gmock matchers in tests
+// in third_party/tensorflow/compiler/xla.
+//
+// Tests including this header can use the macros EXPECT_THAT(...) and
+// ASSERT_THAT(...) in combination with gmock matchers.
+// Example:
+// std::vector<int> vec = Foo();
+// EXPECT_THAT(vec, ::testing::ElementsAre(1, 2, 3));
+//
+// For more details on gmock matchers see:
+// https://github.com/google/googletest/blob/master/googlemock/docs/CheatSheet.md#matchers
+//
+// The advantages of using gmock matchers instead of self-defined matchers are
+// better error messages, more maintainable tests and more test coverage.
+//
+// Note that while the use of gmock matchers is allowed in the xla project, the
+// use of mocks is disallowed in the whole tensorflow project!
+
+#include "tensorflow/core/platform/platform.h"
+
+#if defined(PLATFORM_GOOGLE) || defined(PLATFORM_GOOGLE_ANDROID)
+#include "testing/base/public/gmock.h"
+#else
+#include <gmock/gmock-generated-matchers.h>
+#include <gmock/gmock-matchers.h>
+#endif
+
+#include "tensorflow/core/platform/test.h"
+
+#endif // TENSORFLOW_COMPILER_XLA_TEST_H_
diff --git a/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_buffer.h b/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_buffer.h
index 22ec5349f86..5e316538cef 100644
--- a/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_buffer.h
+++ b/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_buffer.h
@@ -55,7 +55,7 @@ class WeightedQuantilesBuffer {
 : max_size_(std::min(block_size << 1, max_elements)) {
 QCHECK(max_size_ > 0) << "Invalid buffer specification: (" << block_size
 << ", " << max_elements << ")";
- map_.reserve(max_size_);
+ vec_.reserve(max_size_);
 }
 // Disallow copying as it's semantically non-sensical in the Squawd algorithm
@@ -77,42 +77,48 @@ class WeightedQuantilesBuffer {
 return;
 }
- // Insert entry to map if not already present else
- // accumulate the new weight.
- auto result = map_.insert(BufferMapEntry(value, weight));
- if (!result.second) {
- result.first->second += weight;
- }
+ // Push back the entry to the buffer.
+ vec_.push_back(BufferEntry(value, weight));
 }
- // Returns a sorted vector view of the base buffer.
Callers should
- // minimize how often this is called, ideally only right after the buffer
- // becomes full.
- std::vector<BufferEntry> GenerateEntryList() const {
+ // Returns a sorted vector view of the base buffer and clears the buffer.
+ // Callers should minimize how often this is called, ideally only right after
+ // the buffer becomes full.
+ std::vector<BufferEntry> GenerateEntryList() {
 std::vector<BufferEntry> ret;
- ret.reserve(map_.size());
- std::transform(map_.begin(), map_.end(), std::back_inserter(ret),
- [](const BufferMapEntry& map_entry) {
- return BufferEntry(map_entry.first, map_entry.second);
- });
+ if (vec_.size() == 0) {
+ return ret;
+ }
+ ret.swap(vec_);
+ vec_.reserve(max_size_);
 std::sort(ret.begin(), ret.end());
+ size_t num_entries = 0;
+ for (size_t i = 1; i < ret.size(); ++i) {
+ if (ret[i].value != ret[i - 1].value) {
+ BufferEntry tmp = ret[i];
+ ++num_entries;
+ ret[num_entries] = tmp;
+ } else {
+ ret[num_entries].weight += ret[i].weight;
+ }
+ }
+ ret.resize(num_entries + 1);
 return ret;
 }
- int64 Size() const { return map_.size(); }
- bool IsFull() const { return map_.size() >= max_size_; }
- void Clear() { map_.clear(); }
+ int64 Size() const { return vec_.size(); }
+ bool IsFull() const { return vec_.size() >= max_size_; }
+ void Clear() { vec_.clear(); }
 private:
- using BufferMap = typename std::unordered_map<ValueType, WeightType>;
- using BufferMapEntry = typename BufferMap::value_type;
+ using BufferVector = typename std::vector<BufferEntry>;
 // Comparison function.
 static constexpr decltype(CompareFn()) kCompFn = CompareFn();
 // Base buffer.
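The rewritten `GenerateEntryList` trades the hash map for a plain vector that is compacted lazily: sort, then fold runs of equal values together by summing their weights. A standalone Python sketch of that compaction step:

```python
def compact(entries):
    """Sort (value, weight) pairs and merge duplicate values, summing weights.

    Mirrors the sort-then-merge loop in GenerateEntryList above.
    """
    out = []
    for value, weight in sorted(entries):
        if out and out[-1][0] == value:
            out[-1] = (value, out[-1][1] + weight)  # accumulate duplicate
        else:
            out.append((value, weight))
    return out

# Same data as the buffer test: the two entries with value 2 are merged.
print(compact([(5, 9), (2, 3), (-1, 7), (2, 1)]))
# [(-1, 7), (2, 4), (5, 9)]
```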
 size_t max_size_;
- BufferMap map_;
+ BufferVector vec_;
};
template <typename ValueType, typename WeightType, typename CompareFn>
diff --git a/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_buffer_test.cc b/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_buffer_test.cc
index 02696fb4f18..8e403186651 100644
--- a/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_buffer_test.cc
+++ b/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_buffer_test.cc
@@ -69,47 +69,32 @@ TEST_F(WeightedQuantilesBufferTest, PushEntryFull) {
 expected.emplace_back(2, 4);
 expected.emplace_back(5, 9);
- // At this point, we have a compaction and duplicate entry 2 is merged.
- EXPECT_FALSE(buffer.IsFull());
- EXPECT_EQ(buffer.GenerateEntryList(), expected);
-
- // Push another unique entry.
- buffer.PushEntry(3, 2);
+ // At this point, we have pushed 4 entries and we expect the buffer to be
+ // full.
 EXPECT_TRUE(buffer.IsFull());
+ EXPECT_EQ(buffer.GenerateEntryList(), expected);
+ EXPECT_FALSE(buffer.IsFull());
+}
+TEST_F(WeightedQuantilesBufferTest, PushEntryFullDeath) {
+ // buffer capacity is 4.
+ Buffer buffer(2, 100);
+ buffer.PushEntry(5, 9);
+ buffer.PushEntry(2, 3);
+ buffer.PushEntry(-1, 7);
+ buffer.PushEntry(2, 1);
+
+ std::vector<BufferEntry> expected;
+ expected.emplace_back(-1, 7);
+ expected.emplace_back(2, 4);
+ expected.emplace_back(5, 9);
+
+ // At this point, we have pushed 4 entries and we expect the buffer to be
+ // full.
+ EXPECT_TRUE(buffer.IsFull());
 // Can't push any more entries before clearing.
 EXPECT_DEATH(({ buffer.PushEntry(6, 6); }), "Buffer already full");
}
-TEST_F(WeightedQuantilesBufferTest, RandomizedPush) {
- // buffer capacity is 6.
- Buffer buffer(3, 100);
- std::array<double, 5> elements = {{1.1, 2.3, 5.1, 8.0, 12.6}};
- std::array<double, 5> counts;
- counts.fill(0.0);
-
- random::PhiloxRandom philox(13);
- random::SimplePhilox rand(&philox);
-
- for (int iters = 10000; iters-- > 0; --iters) {
- // Add entry.
- int32 picked_idx = rand.Uniform(elements.size());
- buffer.PushEntry(elements[picked_idx], 1.0);
- ++counts[picked_idx];
-
- // We can't fill buffer with a number of unique elements < capacity.
- EXPECT_FALSE(buffer.IsFull());
- }
-
- // Ensure we didn't lose any information.
- std::vector<BufferEntry> expected;
- for (int i = 0; i < elements.size(); ++i) {
- if (counts[i] > 0) {
- expected.emplace_back(elements[i], counts[i]);
- }
- }
- EXPECT_EQ(buffer.GenerateEntryList(), expected);
-}
-
} // namespace
} // namespace tensorflow
diff --git a/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_stream.h b/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_stream.h
index ad2358e4c43..daf0e480003 100644
--- a/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_stream.h
+++ b/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_stream.h
@@ -91,12 +91,11 @@ class WeightedQuantilesStream {
 // and push weighted quantile summary up the level chain.
 if (buffer_.IsFull()) {
 PushBuffer(buffer_);
- buffer_.Clear();
 }
 }
 // Pushes full buffer while maintaining approximation error invariants.
- void PushBuffer(const Buffer& buffer) {
+ void PushBuffer(Buffer& buffer) {
 // Validate state.
 QCHECK(!finalized_) << "Finalize() already called.";
@@ -124,7 +123,6 @@ class WeightedQuantilesStream {
 // Flush any remaining buffer elements.
 PushBuffer(buffer_);
- buffer_.Clear();
 // Create final merged summary.
local_summary_.Clear(); diff --git a/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_summary_test.cc b/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_summary_test.cc index e6d10bf08b6..0bdfb406641 100644 --- a/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_summary_test.cc +++ b/tensorflow/contrib/boosted_trees/lib/quantiles/weighted_quantiles_summary_test.cc @@ -91,9 +91,10 @@ TEST_F(WeightedQuantilesSummaryTest, BuildFromBuffer) { } TEST_F(WeightedQuantilesSummaryTest, CompressSeparately) { + const auto entry_list = buffer1_->GenerateEntryList(); for (int new_size = 9; new_size >= 2; --new_size) { Summary summary; - summary.BuildFromBufferEntries(buffer1_->GenerateEntryList()); + summary.BuildFromBufferEntries(entry_list); summary.Compress(new_size); // Expect a max approximation error of 1 / n @@ -161,10 +162,12 @@ TEST_F(WeightedQuantilesSummaryTest, CompressRandomized) { TEST_F(WeightedQuantilesSummaryTest, MergeSymmetry) { // Create two separate summaries and merge. + const auto list_1 = buffer1_->GenerateEntryList(); + const auto list_2 = buffer2_->GenerateEntryList(); Summary summary1; - summary1.BuildFromBufferEntries(buffer1_->GenerateEntryList()); + summary1.BuildFromBufferEntries(list_1); Summary summary2; - summary2.BuildFromBufferEntries(buffer2_->GenerateEntryList()); + summary2.BuildFromBufferEntries(list_2); // Merge summary 2 into 1 and verify. summary1.Merge(summary2); @@ -178,7 +181,7 @@ TEST_F(WeightedQuantilesSummaryTest, MergeSymmetry) { EXPECT_EQ(summary1.Size(), 14); // 14 unique values. // Merge summary 1 into 2 and verify same result. 
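Because `GenerateEntryList` now swaps the underlying vector out, it is a destructive read: the summary tests above capture its result once (`entry_list`, `list_1`, `list_2`) and reuse it instead of calling it twice, and the stream drops its explicit `Clear()` calls. A toy Python sketch of the changed contract (hypothetical class, not the real buffer):

```python
class Buffer:
    """Minimal stand-in showing the destructive GenerateEntryList contract."""

    def __init__(self):
        self._vec = []

    def push_entry(self, value, weight):
        self._vec.append((value, weight))

    def generate_entry_list(self):
        # Returns the sorted contents and clears the buffer in one step, so
        # callers no longer pair it with a separate Clear().
        out, self._vec = sorted(self._vec), []
        return out

b = Buffer()
b.push_entry(2, 1.0)
b.push_entry(1, 1.0)
first = b.generate_entry_list()   # [(1, 1.0), (2, 1.0)]
second = b.generate_entry_list()  # [] -- a second call yields nothing
```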
- summary1.BuildFromBufferEntries(buffer1_->GenerateEntryList()); + summary1.BuildFromBufferEntries(list_1); summary2.Merge(summary1); EXPECT_EQ(summary2.ApproximationError(), 0.0); EXPECT_EQ(summary2.MinValue(), diff --git a/tensorflow/contrib/cmake/tf_python.cmake b/tensorflow/contrib/cmake/tf_python.cmake index 39fbf603f0e..178e03c222e 100755 --- a/tensorflow/contrib/cmake/tf_python.cmake +++ b/tensorflow/contrib/cmake/tf_python.cmake @@ -212,7 +212,6 @@ add_python_module("tensorflow/tensorboard") add_python_module("tensorflow/tensorboard/backend") add_python_module("tensorflow/tensorboard/backend/event_processing") add_python_module("tensorflow/tensorboard/plugins") -add_python_module("tensorflow/tensorboard/plugins/debugger") add_python_module("tensorflow/tensorboard/plugins/projector") add_python_module("tensorflow/tensorboard/plugins/text") add_python_module("tensorflow/tensorboard/scripts") diff --git a/tensorflow/contrib/factorization/BUILD b/tensorflow/contrib/factorization/BUILD index 7f359bea51c..8dc78da6ba3 100644 --- a/tensorflow/contrib/factorization/BUILD +++ b/tensorflow/contrib/factorization/BUILD @@ -202,6 +202,7 @@ tf_py_test( additional_deps = [ ":factorization_py", ":factorization_py_CYCLIC_DEPENDENCIES_THAT_NEED_TO_GO", + ":factorization_ops_test_utils_py", "//third_party/py/numpy", "//tensorflow/python:array_ops", "//tensorflow/python:client_testlib", diff --git a/tensorflow/contrib/factorization/python/ops/factorization_ops.py b/tensorflow/contrib/factorization/python/ops/factorization_ops.py index b853652629c..8fb33dad6fd 100644 --- a/tensorflow/contrib/factorization/python/ops/factorization_ops.py +++ b/tensorflow/contrib/factorization/python/ops/factorization_ops.py @@ -190,7 +190,8 @@ class WALSModel(object): num_col_shards=1, row_weights=1, col_weights=1, - use_factors_weights_cache=True): + use_factors_weights_cache=True, + use_gramian_cache=True): """Creates model for WALS matrix factorization. 
Args: @@ -224,6 +225,8 @@ class WALSModel(object): col_weights: See row_weights. use_factors_weights_cache: When True, the factors and weights will be cached on the workers before the updates start. Defaults to True. + use_gramian_cache: When True, the Gramians will be cached on the workers + before the updates start. Defaults to True. """ self._input_rows = input_rows self._input_cols = input_cols @@ -243,6 +246,7 @@ class WALSModel(object): self._num_col_shards, "col_weights") self._use_factors_weights_cache = use_factors_weights_cache + self._use_gramian_cache = use_gramian_cache self._row_factors = self._create_factors(self._input_rows, self._n_components, self._num_row_shards, row_init, @@ -495,10 +499,13 @@ class WALSModel(object): """Creates local cache of factors, weights and gramian for rows and columns. Note that currently the caching strategy is as follows: - When initiating a row(column) update, the column(row) gramian is computed - and cached while the row gramian is reset; optionally, column(row) factors - and weights are cached and row(column) factors and weights are reset when - use_factors_weights_cache is True. + When initiating a row (resp. column) update: + - The column (resp. row) gramian is computed. + - Optionally, if use_gramian_cache is True, the column (resp. row) Gramian + is cached, while the row (resp. column) gramian is reset. + - Optionally, if use_factors_weights_cache is True, the column (resp. row) + factors and weights are cached, while the row (resp. column) factors and + weights are reset. 
""" (self._row_factors_cache, row_factors_cache_init, @@ -515,18 +522,20 @@ class WALSModel(object): self._row_weights, "row_wt_cache", pass_through=not self._use_factors_weights_cache) - (self._col_wt_cache, col_wt_cache_init, _) = self._cached_copy( self._col_weights, "col_wt_cache", pass_through=not self._use_factors_weights_cache) - (self._row_gramian_cache, row_gramian_cache_init, row_gramian_cache_reset) = self._cached_copy( - self._row_gramian, "row_gramian_cache", pass_through=False) + self._row_gramian, + "row_gramian_cache", + pass_through=not self._use_gramian_cache) (self._col_gramian_cache, col_gramian_cache_init, col_gramian_cache_reset) = self._cached_copy( - self._col_gramian, "col_gramian_cache", pass_through=False) + self._col_gramian, + "col_gramian_cache", + pass_through=not self._use_gramian_cache) self._row_updates_init = control_flow_ops.group(col_factors_cache_init, row_factors_cache_reset, diff --git a/tensorflow/contrib/factorization/python/ops/wals.py b/tensorflow/contrib/factorization/python/ops/wals.py index 3fd2cbbec2b..41211859f18 100644 --- a/tensorflow/contrib/factorization/python/ops/wals.py +++ b/tensorflow/contrib/factorization/python/ops/wals.py @@ -18,7 +18,10 @@ from __future__ import absolute_import from __future__ import division from __future__ import print_function +from tensorflow.contrib.factorization.python.ops import factorization_ops from tensorflow.contrib.framework.python.ops import variables as framework_variables +from tensorflow.contrib.learn.python.learn.estimators import estimator +from tensorflow.contrib.learn.python.learn.estimators import model_fn from tensorflow.python.framework import dtypes from tensorflow.python.framework import ops from tensorflow.python.ops import array_ops @@ -221,3 +224,321 @@ class _SweepHook(session_run_hook.SessionRunHook): self._is_sweep_done = run_values.results[0] logging.info("Partial fit done.") + +def _wals_factorization_model_function(features, labels, mode, params): + 
"""Model function for the WALSFactorization estimator. + + Args: + features: Dictionary of features. See WALSMatrixFactorization. + labels: Must be None. + mode: A model_fn.ModeKeys object. + params: Dictionary of parameters containing arguments passed to the + WALSMatrixFactorization constructor. + + Returns: + A ModelFnOps object. + """ + assert labels is None + use_factors_weights_cache = ( + params["use_factors_weights_cache_for_training"] + and mode == model_fn.ModeKeys.TRAIN) + use_gramian_cache = ( + params["use_gramian_cache_for_training"] + and mode == model_fn.ModeKeys.TRAIN) + model = factorization_ops.WALSModel( + params["num_rows"], + params["num_cols"], + params["embedding_dimension"], + unobserved_weight=params["unobserved_weight"], + regularization=params["regularization_coeff"], + row_init=params["row_init"], + col_init=params["col_init"], + num_row_shards=params["num_row_shards"], + num_col_shards=params["num_col_shards"], + row_weights=params["row_weights"], + col_weights=params["col_weights"], + use_factors_weights_cache=use_factors_weights_cache, + use_gramian_cache=use_gramian_cache) + + # Get input rows and cols. 
We either update rows or columns depending on + # the value of row_sweep, which is maintained using a session hook. + input_rows = features[WALSMatrixFactorization.INPUT_ROWS] + input_cols = features[WALSMatrixFactorization.INPUT_COLS] + input_row_indices, _ = array_ops.unique(input_rows.indices[:, 0]) + input_col_indices, _ = array_ops.unique(input_cols.indices[:, 0]) + + # Train ops, controlled using the SweepHook. + # We need to run the following ops: + # Before a row sweep: + # row_update_prep_gramian_op + # initialize_row_update_op + # During a row sweep: + # update_row_factors_op + # Before a col sweep: + # col_update_prep_gramian_op + # initialize_col_update_op + # During a col sweep: + # update_col_factors_op + + is_row_sweep_var = variables.Variable( + True, trainable=False, name="is_row_sweep", + collections=[ops.GraphKeys.GLOBAL_VARIABLES]) + # The row sweep is determined by is_row_sweep_var (controlled by the + # sweep_hook) in TRAIN mode, and manually in EVAL mode. + is_row_sweep = (features[WALSMatrixFactorization.PROJECT_ROW] + if mode == model_fn.ModeKeys.EVAL else is_row_sweep_var) + + def update_row_factors(): + return model.update_row_factors(sp_input=input_rows, transpose_input=False) + def update_col_factors(): + return model.update_col_factors(sp_input=input_cols, transpose_input=True) + _, train_op, loss = control_flow_ops.cond( + is_row_sweep, update_row_factors, update_col_factors) + + row_prep_ops = [model.row_update_prep_gramian_op, + model.initialize_row_update_op] + col_prep_ops = [model.col_update_prep_gramian_op, + model.initialize_col_update_op] + cache_init_ops = [model.worker_init] + + sweep_hook = _SweepHook( + is_row_sweep_var, + train_op, + params["num_rows"], + params["num_cols"], + input_row_indices, + input_col_indices, + row_prep_ops, + col_prep_ops, + cache_init_ops, + ) + + # Prediction ops (only return predictions in INFER mode). + predictions = {} + if mode == model_fn.ModeKeys.INFER: + project_row = features[WALSMatrixFactorization.PROJECT_ROW]
+ projection_weights = features.get( + WALSMatrixFactorization.PROJECTION_WEIGHTS) + def get_row_projection(): + return model.project_row_factors( + sp_input=input_rows, + projection_weights=projection_weights, + transpose_input=False) + def get_col_projection(): + return model.project_col_factors( + sp_input=input_cols, + projection_weights=projection_weights, + transpose_input=True) + + predictions[WALSMatrixFactorization.PROJECTION_RESULT] = ( + control_flow_ops.cond( + project_row, get_row_projection, get_col_projection)) + + return model_fn.ModelFnOps( + mode=mode, + predictions=predictions, + loss=loss, + eval_metric_ops={}, + train_op=train_op, + training_hooks=[sweep_hook]) + + +class WALSMatrixFactorization(estimator.Estimator): + """An Estimator for Weighted Matrix Factorization, using the WALS method. + + WALS (Weighted Alternating Least Squares) is an algorithm for weighted matrix + factorization. It computes a low-rank approximation of a given sparse (n x m) + matrix A, by a product of two matrices, U * V^T, where U is a (n x k) matrix + and V is a (m x k) matrix. Here k is the rank of the approximation, also + called the embedding dimension. We refer to U as the row factors, and V as the + column factors. + See tensorflow/contrib/factorization/g3doc/wals.md for the precise problem + formulation. + + The training proceeds in sweeps: during a row sweep, we fix V and solve for U. + During a column sweep, we fix U and solve for V. Each one of these problems is + an unconstrained quadratic minimization problem and can be solved exactly (it + can also be solved in mini-batches, since the solution decouples nicely). + Alternation between sweeps is achieved by using a hook during training, + which is responsible for keeping track of the sweeps and running preparation + ops at the beginning of each sweep. It also updates the global_step variable, + which keeps track of the number of batches processed since the beginning of + training.
+ The current implementation assumes that the training is run on a single + machine, and will fail if config.num_worker_replicas is not equal to one. + Training is done by calling self.fit(input_fn=input_fn), where input_fn + provides two queues: one for rows of the input matrix, and one for rows of the + transposed input matrix (i.e. columns of the original matrix). Note that + during a row sweep, only row batches are processed (ignoring column batches) + and vice-versa. + + For prediction, given a new set of input rows A' (e.g. new rows of the A + matrix), we compute a corresponding set of row factors U', such that U' * V^T + is a good approximation of A'. We call this operation a row projection. A + similar operation is defined for columns. + Projection is done by calling self.get_projections(input_fn=input_fn), where + input_fn satisfies the constraints given below. + + The input functions must satisfy the following constraints: Calling input_fn + must return a tuple (features, labels) where labels is None, and features is + a dict containing the following keys: + TRAIN: + - WALSMatrixFactorization.INPUT_ROWS: float32 SparseTensor (matrix). + Rows of the input matrix to process (or to project). + - WALSMatrixFactorization.INPUT_COLS: float32 SparseTensor (matrix). + Columns of the input matrix to process (or to project), transposed. + INFER: + - WALSMatrixFactorization.INPUT_ROWS: float32 SparseTensor (matrix). + Rows to project. + - WALSMatrixFactorization.INPUT_COLS: float32 SparseTensor (matrix). + Columns to project. + - WALSMatrixFactorization.PROJECT_ROW: Boolean Tensor. Whether to project + the rows or columns. + - WALSMatrixFactorization.PROJECTION_WEIGHTS (Optional): float32 Tensor + (vector). The weights to use in the projection. + EVAL: + - WALSMatrixFactorization.INPUT_ROWS: float32 SparseTensor (matrix). + Rows to project. + - WALSMatrixFactorization.INPUT_COLS: float32 SparseTensor (matrix). + Columns to project. 
+ - WALSMatrixFactorization.PROJECT_ROW: Boolean Tensor. Whether to project + the rows or columns. + """ + # Keys to be used in model_fn + # Features keys + INPUT_ROWS = "input_rows" + INPUT_COLS = "input_cols" + PROJECT_ROW = "project_row" + PROJECTION_WEIGHTS = "projection_weights" + # Predictions key + PROJECTION_RESULT = "projection" + + def __init__(self, + num_rows, + num_cols, + embedding_dimension, + unobserved_weight=0.1, + regularization_coeff=None, + row_init="random", + col_init="random", + num_row_shards=1, + num_col_shards=1, + row_weights=1, + col_weights=1, + use_factors_weights_cache_for_training=True, + use_gramian_cache_for_training=True, + model_dir=None, + config=None): + """Creates a model for matrix factorization using the WALS method. + + Args: + num_rows: Total number of rows for input matrix. + num_cols: Total number of cols for input matrix. + embedding_dimension: Dimension to use for the factors. + unobserved_weight: Weight of the unobserved entries of matrix. + regularization_coeff: Weight of the L2 regularization term. Defaults to + None, in which case the problem is not regularized. + row_init: Initializer for row factor. Must be either: + - A tensor: The row factor matrix is initialized to this tensor, + - A numpy constant, + - "random": The rows are initialized using a normal distribution. + col_init: Initializer for column factor. See row_init. + num_row_shards: Number of shards to use for the row factors. + num_col_shards: Number of shards to use for the column factors. + row_weights: Must be in one of the following three formats: + - None: In this case, the weight of every entry is the unobserved_weight + and the problem simplifies to ALS. Note that, in this case, + col_weights must also be set to "None". + - List of lists of non-negative scalars, of the form + [[w_0, w_1, ...], [w_k, ... 
], [...]], + where the number of inner lists is equal to the number of row factor + shards and the elements in each inner list are the weights for the + rows of that shard. In this case, + w_ij = unobserved_weight + row_weights[i] * col_weights[j]. + - A non-negative scalar: This value is used for all row weights. + Note that it is allowed to have row_weights as a list and col_weights + as a scalar, or vice-versa. + col_weights: See row_weights. + use_factors_weights_cache_for_training: Boolean, whether the factors and + weights will be cached on the workers before the updates start, during + training. Defaults to True. + Note that caching is disabled during prediction. + use_gramian_cache_for_training: Boolean, whether the Gramians will be + cached on the workers before the updates start, during training. + Defaults to True. Note that caching is disabled during prediction. + model_dir: The directory to save the model results and log files. + config: A Configuration object. See Estimator. + + Raises: + ValueError: If config.num_worker_replicas is strictly greater than one. + The current implementation only supports running on a single worker. + """ + # TODO(walidk): Support distributed training. + # TODO(walidk): Support power-law based weight computation. + # TODO(walidk): Add factor lookup by indices, with caching. + # TODO(walidk): Support caching during prediction.
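The docstring above defines the entry weights as w_ij = unobserved_weight + row_weights[i] * col_weights[j], and each sweep solves a regularized least-squares problem exactly with the other factor held fixed. As a reading aid, here is a minimal NumPy sketch of the simplified unweighted (ALS) case, i.e. row_weights and col_weights set to None; it only illustrates the alternating update, it is not the TensorFlow implementation, and all names in it are made up.

```python
import numpy as np

def als_sweeps(a, k=2, reg=0.01, n_sweeps=20, seed=0):
    """One pair of WALS-style sweeps per iteration, unweighted (ALS) case."""
    rng = np.random.RandomState(seed)
    n, m = a.shape
    u = rng.normal(size=(n, k))
    v = rng.normal(size=(m, k))
    eye = reg * np.eye(k)
    for _ in range(n_sweeps):
        # Row sweep: fix V, solve the quadratic problem exactly for U.
        u = a @ v @ np.linalg.inv(v.T @ v + eye)
        # Column sweep: fix U, solve exactly for V.
        v = a.T @ u @ np.linalg.inv(u.T @ u + eye)
    return u, v

# A rank-1 test matrix is recovered almost exactly with k=2.
a = np.outer(np.arange(1., 6.), np.arange(1., 8.))
u, v = als_sweeps(a)
err = np.linalg.norm(a - u @ v.T) / np.linalg.norm(a)
```

Each update decouples across rows (and across columns), which is why the estimator can process the sweeps in mini-batches.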
+ + params = { + "num_rows": num_rows, + "num_cols": num_cols, + "embedding_dimension": embedding_dimension, + "unobserved_weight": unobserved_weight, + "regularization_coeff": regularization_coeff, + "row_init": row_init, + "col_init": col_init, + "num_row_shards": num_row_shards, + "num_col_shards": num_col_shards, + "row_weights": row_weights, + "col_weights": col_weights, + "use_factors_weights_cache_for_training": + use_factors_weights_cache_for_training, + "use_gramian_cache_for_training": use_gramian_cache_for_training + } + self._row_factors_names = ["row_factors_shard_%d" % i + for i in range(num_row_shards)] + self._col_factors_names = ["col_factors_shard_%d" % i + for i in range(num_col_shards)] + + super(WALSMatrixFactorization, self).__init__( + model_fn=_wals_factorization_model_function, + params=params, + model_dir=model_dir, + config=config) + + if self._config is not None and self._config.num_worker_replicas > 1: + raise ValueError("WALSMatrixFactorization must be run on a single worker " + "replica.") + + def get_row_factors(self): + """Returns the row factors of the model, loading them from checkpoint. + + Should only be run after training. + + Returns: + A list of the row factors of the model. + """ + return [self.get_variable_value(name) for name in self._row_factors_names] + + def get_col_factors(self): + """Returns the column factors of the model, loading them from checkpoint. + + Should only be run after training. + + Returns: + A list of the column factors of the model. + """ + return [self.get_variable_value(name) for name in self._col_factors_names] + + def get_projections(self, input_fn): + """Computes the projections of the rows or columns given in input_fn. + + Runs predict() with the given input_fn, and returns the results. Should only + be run after training. + + Args: + input_fn: Input function which specifies the rows or columns to project. + Returns: + A generator of the projected factors. 
+ """ + return (result[WALSMatrixFactorization.PROJECTION_RESULT] + for result in self.predict(input_fn=input_fn)) diff --git a/tensorflow/contrib/factorization/python/ops/wals_test.py b/tensorflow/contrib/factorization/python/ops/wals_test.py index 2ae2d3ab058..3f5787ea871 100644 --- a/tensorflow/contrib/factorization/python/ops/wals_test.py +++ b/tensorflow/contrib/factorization/python/ops/wals_test.py @@ -18,16 +18,311 @@ from __future__ import absolute_import from __future__ import division from __future__ import print_function +import itertools +import json +import numpy as np + +from tensorflow.contrib.factorization.python.ops import factorization_ops_test_utils from tensorflow.contrib.factorization.python.ops import wals as wals_lib +from tensorflow.contrib.learn.python.learn import run_config +from tensorflow.contrib.learn.python.learn.estimators import run_config as run_config_lib +from tensorflow.python.framework import constant_op from tensorflow.python.framework import dtypes +from tensorflow.python.framework import sparse_tensor from tensorflow.python.ops import array_ops from tensorflow.python.ops import control_flow_ops +from tensorflow.python.ops import embedding_ops +from tensorflow.python.ops import math_ops +from tensorflow.python.ops import sparse_ops from tensorflow.python.ops import state_ops from tensorflow.python.ops import variables from tensorflow.python.platform import test +from tensorflow.python.training import input as input_lib from tensorflow.python.training import session_run_hook +class WALSMatrixFactorizationTest(test.TestCase): + INPUT_MATRIX = factorization_ops_test_utils.INPUT_MATRIX + + def np_array_to_sparse(self, np_array): + """Transforms an np.array to a tf.SparseTensor.""" + return factorization_ops_test_utils.np_matrix_to_tf_sparse(np_array) + + def calculate_loss(self): + """Calculates the loss of the current (trained) model.""" + current_rows = embedding_ops.embedding_lookup( + self._model.get_row_factors(), 
math_ops.range(self._num_rows), + partition_strategy='div') + current_cols = embedding_ops.embedding_lookup( + self._model.get_col_factors(), math_ops.range(self._num_cols), + partition_strategy='div') + row_wts = embedding_ops.embedding_lookup( + self._row_weights, math_ops.range(self._num_rows), + partition_strategy='div') + col_wts = embedding_ops.embedding_lookup( + self._col_weights, math_ops.range(self._num_cols), + partition_strategy='div') + sp_inputs = self.np_array_to_sparse(self.INPUT_MATRIX) + return factorization_ops_test_utils.calculate_loss( + sp_inputs, current_rows, current_cols, self._regularization_coeff, + self._unobserved_weight, row_wts, col_wts) + + # TODO(walidk): Replace with input_reader_utils functions once open sourced. + def remap_sparse_tensor_rows(self, sp_x, row_ids, shape): + """Remaps the row ids of a tf.SparseTensor.""" + old_row_ids, old_col_ids = array_ops.split( + value=sp_x.indices, num_or_size_splits=2, axis=1) + new_row_ids = array_ops.gather(row_ids, old_row_ids) + new_indices = array_ops.concat([new_row_ids, old_col_ids], 1) + return sparse_tensor.SparseTensor( + indices=new_indices, values=sp_x.values, dense_shape=shape) + + # TODO(walidk): Add an option to randomize inputs. 
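The remap_sparse_tensor_rows helper above rewrites the row coordinates of a batched SparseTensor (which are relative to the batch) back to their original ids in the full matrix. A NumPy sketch of the same index gather, with hypothetical names, to make the remapping concrete:

```python
import numpy as np

def remap_rows(indices, row_ids):
    """Maps in-batch row coordinates back through row_ids (illustration only)."""
    indices = np.asarray(indices)
    # Gather: in-batch row b becomes original row row_ids[b]; cols unchanged.
    new_rows = np.asarray(row_ids)[indices[:, 0]]
    return np.column_stack([new_rows, indices[:, 1]])

# Batch holds original rows 3 and 1, stored at in-batch positions 0 and 1.
batch_indices = [[0, 2], [0, 5], [1, 4]]
remapped = remap_rows(batch_indices, row_ids=[3, 1])
```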
+ def input_fn(self, np_matrix, batch_size, project_row=None, + projection_weights=None, col_ids=None): + """Returns an input_fn that selects row and col batches from np_matrix.""" + def extract_features(row_batch, col_batch, shape): + row_ids = row_batch[0] + col_ids = col_batch[0] + rows = self.remap_sparse_tensor_rows(row_batch[1], row_ids, shape) + cols = self.remap_sparse_tensor_rows(col_batch[1], col_ids, shape) + features = { + wals_lib.WALSMatrixFactorization.INPUT_ROWS: rows, + wals_lib.WALSMatrixFactorization.INPUT_COLS: cols, + } + return features + + def _fn(): + num_rows = np.shape(np_matrix)[0] + num_cols = np.shape(np_matrix)[1] + row_ids = math_ops.range(num_rows, dtype=dtypes.int64) + col_ids = math_ops.range(num_cols, dtype=dtypes.int64) + sp_mat = self.np_array_to_sparse(np_matrix) + sp_mat_t = sparse_ops.sparse_transpose(sp_mat) + row_batch = input_lib.batch( + [row_ids, sp_mat], + batch_size=min(batch_size, num_rows), + capacity=10, + enqueue_many=True) + col_batch = input_lib.batch( + [col_ids, sp_mat_t], + batch_size=min(batch_size, num_cols), + capacity=10, + enqueue_many=True) + + features = extract_features(row_batch, col_batch, sp_mat.dense_shape) + if projection_weights is not None: + weights_batch = input_lib.batch( + projection_weights, + batch_size=batch_size, + capacity=10, + enqueue_many=True) + features[wals_lib.WALSMatrixFactorization.PROJECTION_WEIGHTS] = ( + weights_batch) + if project_row is not None: + features[wals_lib.WALSMatrixFactorization.PROJECT_ROW] = ( + constant_op.constant(project_row)) + + labels = None + return features, labels + + return _fn + + @property + def row_steps(self): + return np.ceil(self._num_rows / self.batch_size) + + @property + def col_steps(self): + return np.ceil(self._num_cols / self.batch_size) + + @property + def batch_size(self): + return 2 + + @property + def use_cache(self): + return False + + def setUp(self): + self._num_rows = 5 + self._num_cols = 7 + self._embedding_dimension = 3 + 
self._unobserved_weight = 0.1 + self._num_row_shards = 2 + self._num_col_shards = 3 + self._regularization_coeff = 0.01 + self._col_init = [ + # Shard 0. + [[-0.36444709, -0.39077035, -0.32528427], + [1.19056475, 0.07231052, 2.11834812], + [0.93468881, -0.71099287, 1.91826844]], + # Shard 1. + [[1.18160152, 1.52490723, -0.50015002], + [1.82574749, -0.57515913, -1.32810032]], + # Shard 2. + [[-0.15515432, -0.84675711, 0.13097958], + [-0.9246484, 0.69117504, 1.2036494]], + ] + self._row_weights = [[0.1, 0.2, 0.3], [0.4, 0.5]] + self._col_weights = [[0.1, 0.2, 0.3], [0.4, 0.5], [0.6, 0.7]] + + # Values of row and column factors after running one iteration of factor + # updates. + self._row_factors_0 = [[0.097689, -0.219293, -0.020780], + [0.50842, 0.64626, 0.22364], + [0.401159, -0.046558, -0.192854]] + self._row_factors_1 = [[1.20597, -0.48025, 0.35582], + [1.5564, 1.2528, 1.0528]] + self._col_factors_0 = [[2.4725, -1.2950, -1.9980], + [0.44625, 1.50771, 1.27118], + [1.39801, -2.10134, 0.73572]] + self._col_factors_1 = [[3.36509, -0.66595, -3.51208], + [0.57191, 1.59407, 1.33020]] + self._col_factors_2 = [[3.3459, -1.3341, -3.3008], + [0.57366, 1.83729, 1.26798]] + self._model = wals_lib.WALSMatrixFactorization( + self._num_rows, + self._num_cols, + self._embedding_dimension, + self._unobserved_weight, + col_init=self._col_init, + regularization_coeff=self._regularization_coeff, + num_row_shards=self._num_row_shards, + num_col_shards=self._num_col_shards, + row_weights=self._row_weights, + col_weights=self._col_weights, + use_factors_weights_cache_for_training=self.use_cache, + use_gramian_cache_for_training=self.use_cache) + + def test_fit(self): + # Row sweep.
+ input_fn = self.input_fn(np_matrix=self.INPUT_MATRIX, + batch_size=self.batch_size) + self._model.fit(input_fn=input_fn, steps=self.row_steps) + row_factors = self._model.get_row_factors() + self.assertAllClose(row_factors[0], self._row_factors_0, atol=1e-3) + self.assertAllClose(row_factors[1], self._row_factors_1, atol=1e-3) + + # Col sweep. + # Running fit a second time will resume training from the checkpoint. + input_fn = self.input_fn(np_matrix=self.INPUT_MATRIX, + batch_size=self.batch_size) + self._model.fit(input_fn=input_fn, steps=self.col_steps) + col_factors = self._model.get_col_factors() + self.assertAllClose(col_factors[0], self._col_factors_0, atol=1e-3) + self.assertAllClose(col_factors[1], self._col_factors_1, atol=1e-3) + self.assertAllClose(col_factors[2], self._col_factors_2, atol=1e-3) + + def test_predict(self): + input_fn = self.input_fn(np_matrix=self.INPUT_MATRIX, + batch_size=self.batch_size) + # Project rows 1 and 4 from the input matrix. + proj_input_fn = self.input_fn( + np_matrix=self.INPUT_MATRIX[[1, 4], :], + batch_size=2, + project_row=True, + projection_weights=[[0.2, 0.5]]) + + self._model.fit(input_fn=input_fn, steps=self.row_steps) + projections = self._model.get_projections(proj_input_fn) + projected_rows = list(itertools.islice(projections, 2)) + + self.assertAllClose( + projected_rows, + [self._row_factors_0[1], self._row_factors_1[1]], + atol=1e-3) + + # Project columns 5, 3, 1 from the input matrix. 
+ proj_input_fn = self.input_fn( + np_matrix=self.INPUT_MATRIX[:, [5, 3, 1]], + batch_size=3, + project_row=False, + projection_weights=[[0.6, 0.4, 0.2]]) + + self._model.fit(input_fn=input_fn, steps=self.col_steps) + projections = self._model.get_projections(proj_input_fn) + projected_cols = list(itertools.islice(projections, 3)) + self.assertAllClose( + projected_cols, + [self._col_factors_2[0], self._col_factors_1[0], + self._col_factors_0[1]], + atol=1e-3) + + def test_eval(self): + # Do a row sweep then evaluate the model on row inputs. + # The evaluate function returns the loss of the projected rows, but since + # projection is idempotent, the eval loss must match the model loss. + input_fn = self.input_fn(np_matrix=self.INPUT_MATRIX, + batch_size=self.batch_size) + self._model.fit(input_fn=input_fn, steps=self.row_steps) + eval_input_fn_row = self.input_fn(np_matrix=self.INPUT_MATRIX, batch_size=1, + project_row=True) + loss = self._model.evaluate( + input_fn=eval_input_fn_row, steps=self._num_rows)['loss'] + + with self.test_session(): + true_loss = self.calculate_loss() + + self.assertNear( + loss, true_loss, err=.001, + msg="""After row update, eval loss = {}, does not match the true + loss = {}.""".format(loss, true_loss)) + + # Do a col sweep then evaluate the model on col inputs. 
+ self._model.fit(input_fn=input_fn, steps=self.col_steps) + eval_input_fn_col = self.input_fn(np_matrix=self.INPUT_MATRIX, batch_size=1, + project_row=False) + loss = self._model.evaluate( + input_fn=eval_input_fn_col, steps=self._num_cols)['loss'] + + with self.test_session(): + true_loss = self.calculate_loss() + + self.assertNear( + loss, true_loss, err=.001, + msg="""After col update, eval loss = {}, does not match the true + loss = {}.""".format(loss, true_loss)) + + +class WALSMatrixFactorizationTestCached(WALSMatrixFactorizationTest): + + @property + def use_cache(self): + return True + + +class WALSMatrixFactorizationTestFullBatch(WALSMatrixFactorizationTest): + + @property + def batch_size(self): + return 100 + + +class WALSMatrixFactorizationUnsupportedTest(test.TestCase): + + def setUp(self): + pass + + def testDistributedWALSUnsupported(self): + tf_config = { + 'cluster': { + run_config_lib.TaskType.PS: ['host1:1', 'host2:2'], + run_config_lib.TaskType.WORKER: ['host3:3', 'host4:4'] + }, + 'task': { + 'type': run_config_lib.TaskType.WORKER, + 'index': 1 + } + } + with test.mock.patch.dict('os.environ', + {'TF_CONFIG': json.dumps(tf_config)}): + config = run_config.RunConfig() + self.assertEqual(config.num_worker_replicas, 2) + with self.assertRaises(ValueError): + self._model = wals_lib.WALSMatrixFactorization(1, 1, 1, config=config) + + class SweepHookTest(test.TestCase): def setUp(self): @@ -45,7 +340,7 @@ class SweepHookTest(test.TestCase): def run_hook_with_indices(self, sweep_hook, row_indices, col_indices): with self.test_session() as sess: - # Before run + # Before run. run_context = session_run_hook.SessionRunContext( original_args=None, session=sess) sess_run_args = sweep_hook.before_run(run_context) @@ -53,11 +348,11 @@ class SweepHookTest(test.TestCase): self._input_row_indices_ph: row_indices, self._input_col_indices_ph: col_indices } - # Run + # Run.
run_results = sess.run(sess_run_args.fetches, feed_dict=feed_dict) run_values = session_run_hook.SessionRunValues( results=run_results, options=None, run_metadata=None) - # After run + # After run. sweep_hook.after_run(run_context, run_values) def test_row_sweep(self): @@ -74,9 +369,9 @@ class SweepHookTest(test.TestCase): self._col_prep_ops, self._init_ops) - # Initialize variables + # Initialize variables. sess.run([variables.global_variables_initializer()]) - # Row sweep + # Row sweep. self.run_hook_with_indices(sweep_hook, [], []) self.assertTrue(sess.run(self._init_done), msg='init ops not run by the sweep_hook') diff --git a/tensorflow/contrib/layers/python/layers/embedding_ops.py b/tensorflow/contrib/layers/python/layers/embedding_ops.py index b1a7f7ee59a..f231ee38597 100644 --- a/tensorflow/contrib/layers/python/layers/embedding_ops.py +++ b/tensorflow/contrib/layers/python/layers/embedding_ops.py @@ -351,7 +351,7 @@ def _sampled_scattered_embedding_lookup( # No need to validate the indices since we have checked the params # dimensions and we know the largest id. 
result = embedding_ops.embedding_lookup( - params, ids, partition_strategy="div", validate_indices=False) + params, ids, partition_strategy="div") return array_ops.reshape(result, array_ops.concat([values_shape, [dimension]], 0)) @@ -681,19 +681,17 @@ def embedding_lookup_sparse_with_distributed_aggregation( return embeddings -def _do_gather(params, ids, validate_indices=True, name=None): +def _do_gather(params, ids, name=None): """Deals with doing gather differently for resource variables.""" if isinstance(params, resource_variable_ops.ResourceVariable): return params.sparse_read(ids, name=name) - return array_ops.gather( - params, ids, name=name, validate_indices=validate_indices) + return array_ops.gather(params, ids, name=name) def _embedding_lookup_with_distributed_aggregation(params, ids, partition_strategy="mod", name=None, - validate_indices=True, max_norm=None, weights=None, idx=None, @@ -724,8 +722,7 @@ def _embedding_lookup_with_distributed_aggregation(params, params = ops.convert_n_to_tensor_or_indexed_slices(params, name="params") if np == 1: with ops.colocate_with(params[0]): - ret = maybe_normalize( - _do_gather(params[0], ids, validate_indices=validate_indices)) + ret = maybe_normalize(_do_gather(params[0], ids)) ignore_weights = weights is None if not ignore_weights: if weights.dtype != ret.dtype: @@ -803,9 +800,7 @@ def _embedding_lookup_with_distributed_aggregation(params, partitioned_result = [] for p in xrange(np): with ops.colocate_with(params[p]): - partitioned_result.append( - _do_gather( - params[p], gather_ids[p], validate_indices=validate_indices)) + partitioned_result.append(_do_gather(params[p], gather_ids[p])) ignore_weights = weights is None if not ignore_weights: diff --git a/tensorflow/contrib/linalg/python/kernel_tests/linear_operator_test.py b/tensorflow/contrib/linalg/python/kernel_tests/linear_operator_test.py index 8b419700db0..c5bfc6e1fd5 100644 --- a/tensorflow/contrib/linalg/python/kernel_tests/linear_operator_test.py +++ 
b/tensorflow/contrib/linalg/python/kernel_tests/linear_operator_test.py @@ -78,8 +78,9 @@ class LinearOperatorApplyOnly(linalg.LinearOperator): def _shape_tensor(self): return array_ops.shape(self._matrix) - def _apply(self, x, adjoint=False): - return math_ops.matmul(self._matrix, x, adjoint_a=adjoint) + def _apply(self, x, adjoint=False, adjoint_arg=False): + return math_ops.matmul( + self._matrix, x, adjoint_a=adjoint, adjoint_b=adjoint_arg) class LinearOperatorTest(test.TestCase): diff --git a/tensorflow/contrib/linalg/python/ops/linear_operator.py b/tensorflow/contrib/linalg/python/ops/linear_operator.py index 5052a0b15cf..454411d93cf 100644 --- a/tensorflow/contrib/linalg/python/ops/linear_operator.py +++ b/tensorflow/contrib/linalg/python/ops/linear_operator.py @@ -30,7 +30,6 @@ __all__ = ["LinearOperator"] # TODO(langmore) Use matrix_solve_ls for singular or non-square matrices. -# TODO(langmore) Add adjoint_x arg to apply, solve. class LinearOperator(object): """Base class defining a [batch of] linear operator[s]. @@ -490,16 +489,18 @@ class LinearOperator(object): "Expected argument to have dtype %s. Found: %s in tensor %s" % (self.dtype, arg.dtype, arg)) - def _apply(self, x, adjoint=False): + def _apply(self, x, adjoint=False, adjoint_arg=False): raise NotImplementedError("_apply is not implemented.") - def apply(self, x, adjoint=False, name="apply"): + def apply(self, x, adjoint=False, adjoint_arg=False, name="apply"): """Transform `x` with left multiplication: `x --> Ax`. Args: x: `Tensor` with compatible shape and same `dtype` as `self`. See class docstring for definition of compatibility. - adjoint: Python `bool`. If `True`, left multiply by the adjoint. + adjoint: Python `bool`. If `True`, left multiply by the adjoint: `A^H x`. + adjoint_arg: Python `bool`. If `True`, compute `A x^H` where `x^H` is + the hermitian transpose (transposition and complex conjugation). name: A name for this `Op`.
Returns: @@ -508,11 +509,12 @@ class LinearOperator(object): with self._name_scope(name, values=[x]): x = ops.convert_to_tensor(x, name="x") self._check_input_dtype(x) - if adjoint: - self.shape[-2].assert_is_compatible_with(x.get_shape()[-2]) - else: - self.shape[-1].assert_is_compatible_with(x.get_shape()[-2]) - return self._apply(x, adjoint=adjoint) + + self_dim = -2 if adjoint else -1 + arg_dim = -1 if adjoint_arg else -2 + self.shape[self_dim].assert_is_compatible_with(x.get_shape()[arg_dim]) + + return self._apply(x, adjoint=adjoint, adjoint_arg=adjoint_arg) def _determinant(self): raise NotImplementedError("_det is not implemented.") @@ -558,13 +560,13 @@ class LinearOperator(object): with self._name_scope(name): return self._log_abs_determinant() - def _solve(self, rhs, adjoint=False): + def _solve(self, rhs, adjoint=False, adjoint_arg=False): # Since this is an exact solve method for all rhs, this will only be # available for non-singular (batch) operators, in particular the operator # must be square. raise NotImplementedError("_solve is not implemented.") - def solve(self, rhs, adjoint=False, name="solve"): + def solve(self, rhs, adjoint=False, adjoint_arg=False, name="solve"): """Solve `R` (batch) systems of equations exactly: `A X = rhs`. Examples: @@ -588,7 +590,9 @@ class LinearOperator(object): rhs: `Tensor` with same `dtype` as this operator and compatible shape. See class docstring for definition of compatibility. adjoint: Python `bool`. If `True`, solve the system involving the adjoint - of this `LinearOperator`. + of this `LinearOperator`: `A^H X = rhs`. + adjoint_arg: Python `bool`. If `True`, solve `A X = rhs^H` where `rhs^H` + is the hermitian transpose (transposition and complex conjugation). name: A name scope to use for ops added by this method. 
Returns: @@ -608,11 +612,12 @@ class LinearOperator(object): with self._name_scope(name, values=[rhs]): rhs = ops.convert_to_tensor(rhs, name="rhs") self._check_input_dtype(rhs) - if adjoint: - self.shape[-1].assert_is_compatible_with(rhs.get_shape()[-2]) - else: - self.shape[-2].assert_is_compatible_with(rhs.get_shape()[-2]) - return self._solve(rhs, adjoint=adjoint) + + self_dim = -1 if adjoint else -2 + arg_dim = -1 if adjoint_arg else -2 + self.shape[self_dim].assert_is_compatible_with(rhs.get_shape()[arg_dim]) + + return self._solve(rhs, adjoint=adjoint, adjoint_arg=adjoint_arg) def _to_dense(self): """Generic and often inefficient implementation. Override often.""" diff --git a/tensorflow/contrib/linalg/python/ops/linear_operator_composition.py b/tensorflow/contrib/linalg/python/ops/linear_operator_composition.py index 9f3a4d230f7..b1557769b22 100644 --- a/tensorflow/contrib/linalg/python/ops/linear_operator_composition.py +++ b/tensorflow/contrib/linalg/python/ops/linear_operator_composition.py @@ -225,7 +225,7 @@ class LinearOperatorComposition(linear_operator.LinearOperator): return array_ops.concat((batch_shape, matrix_shape), 0) - def _apply(self, x, adjoint=False): + def _apply(self, x, adjoint=False, adjoint_arg=False): # If self.operators = [A, B], and not adjoint, then # apply_order_list = [B, A]. 
# As a result, we return A.apply(B.apply(x)) @@ -234,8 +234,9 @@ class LinearOperatorComposition(linear_operator.LinearOperator): else: apply_order_list = list(reversed(self.operators)) - result = x - for operator in apply_order_list: + result = apply_order_list[0].apply( + x, adjoint=adjoint, adjoint_arg=adjoint_arg) + for operator in apply_order_list[1:]: result = operator.apply(result, adjoint=adjoint) return result @@ -251,7 +252,7 @@ class LinearOperatorComposition(linear_operator.LinearOperator): result += operator.log_abs_determinant() return result - def _solve(self, rhs, adjoint=False): + def _solve(self, rhs, adjoint=False, adjoint_arg=False): # TODO(langmore) Implement solve using solve_ls if some intermediate # operator maps to a high dimensional space. # In that case, an exact solve may still be possible. @@ -264,8 +265,9 @@ class LinearOperatorComposition(linear_operator.LinearOperator): else: solve_order_list = self.operators - solution = rhs - for operator in solve_order_list: + solution = solve_order_list[0].solve( + rhs, adjoint=adjoint, adjoint_arg=adjoint_arg) + for operator in solve_order_list[1:]: solution = operator.solve(solution, adjoint=adjoint) return solution diff --git a/tensorflow/contrib/linalg/python/ops/linear_operator_diag.py b/tensorflow/contrib/linalg/python/ops/linear_operator_diag.py index 0cd7e72a8b6..97e52d08a43 100644 --- a/tensorflow/contrib/linalg/python/ops/linear_operator_diag.py +++ b/tensorflow/contrib/linalg/python/ops/linear_operator_diag.py @@ -206,8 +206,9 @@ class LinearOperatorDiag(linear_operator.LinearOperator): "This diagonal operator contained non-zero imaginary values. 
" " Thus it was not self-adjoint.")) - def _apply(self, x, adjoint=False): + def _apply(self, x, adjoint=False, adjoint_arg=False): diag_term = math_ops.conj(self._diag) if adjoint else self._diag + x = linear_operator_util.matrix_adjoint(x) if adjoint_arg else x diag_mat = array_ops.expand_dims(diag_term, -1) return diag_mat * x @@ -218,8 +219,9 @@ class LinearOperatorDiag(linear_operator.LinearOperator): return math_ops.reduce_sum( math_ops.log(math_ops.abs(self._diag)), reduction_indices=[-1]) - def _solve(self, rhs, adjoint=False): + def _solve(self, rhs, adjoint=False, adjoint_arg=False): diag_term = math_ops.conj(self._diag) if adjoint else self._diag + rhs = linear_operator_util.matrix_adjoint(rhs) if adjoint_arg else rhs inv_diag_mat = array_ops.expand_dims(1. / diag_term, -1) return rhs * inv_diag_mat diff --git a/tensorflow/contrib/linalg/python/ops/linear_operator_full_matrix.py b/tensorflow/contrib/linalg/python/ops/linear_operator_full_matrix.py index f9349682215..64ab5614577 100644 --- a/tensorflow/contrib/linalg/python/ops/linear_operator_full_matrix.py +++ b/tensorflow/contrib/linalg/python/ops/linear_operator_full_matrix.py @@ -19,6 +19,7 @@ from __future__ import division from __future__ import print_function from tensorflow.contrib.linalg.python.ops import linear_operator +from tensorflow.contrib.linalg.python.ops import linear_operator_util from tensorflow.python.framework import dtypes from tensorflow.python.framework import ops from tensorflow.python.ops import array_ops @@ -172,8 +173,9 @@ class LinearOperatorFullMatrix(linear_operator.LinearOperator): def _shape_tensor(self): return array_ops.shape(self._matrix) - def _apply(self, x, adjoint=False): - return math_ops.matmul(self._matrix, x, adjoint_a=adjoint) + def _apply(self, x, adjoint=False, adjoint_arg=False): + return math_ops.matmul( + self._matrix, x, adjoint_a=adjoint, adjoint_b=adjoint_arg) def _determinant(self): if self._is_spd: @@ -187,7 +189,8 @@ class 
LinearOperatorFullMatrix(linear_operator.LinearOperator): abs_det = math_ops.abs(self.determinant()) return math_ops.log(abs_det) - def _solve(self, rhs, adjoint=False): + def _solve(self, rhs, adjoint=False, adjoint_arg=False): + rhs = linear_operator_util.matrix_adjoint(rhs) if adjoint_arg else rhs if self._is_spd: return linalg_ops.cholesky_solve(self._chol, rhs) return linalg_ops.matrix_solve(self._matrix, rhs, adjoint=adjoint) diff --git a/tensorflow/contrib/linalg/python/ops/linear_operator_identity.py b/tensorflow/contrib/linalg/python/ops/linear_operator_identity.py index 60d8b2cdc03..845bf25192e 100644 --- a/tensorflow/contrib/linalg/python/ops/linear_operator_identity.py +++ b/tensorflow/contrib/linalg/python/ops/linear_operator_identity.py @@ -329,8 +329,9 @@ class LinearOperatorIdentity(BaseLinearOperatorIdentity): zeros = array_ops.zeros(shape=special_shape, dtype=self.dtype) return x + zeros - def _apply(self, x, adjoint=False): + def _apply(self, x, adjoint=False, adjoint_arg=False): # Note that adjoint has no effect since this matrix is self-adjoint. 
+ x = linear_operator_util.matrix_adjoint(x) if adjoint_arg else x if self._assert_proper_shapes: aps = linear_operator_util.assert_compatible_matrix_dimensions( self, x) @@ -343,8 +344,8 @@ class LinearOperatorIdentity(BaseLinearOperatorIdentity): def _log_abs_determinant(self): return array_ops.zeros(shape=self.batch_shape_tensor(), dtype=self.dtype) - def _solve(self, rhs, adjoint=False): - return self._apply(rhs) + def _solve(self, rhs, adjoint=False, adjoint_arg=False): + return self._apply(rhs, adjoint_arg=adjoint_arg) def _diag_part(self): return self._ones_diag() @@ -616,7 +617,8 @@ class LinearOperatorScaledIdentity(BaseLinearOperatorIdentity): imag_multiplier, message="LinearOperator was not self-adjoint") - def _apply(self, x, adjoint=False): + def _apply(self, x, adjoint=False, adjoint_arg=False): + x = linear_operator_util.matrix_adjoint(x) if adjoint_arg else x if adjoint: matrix = self._multiplier_matrix_conj else: @@ -634,7 +636,8 @@ class LinearOperatorScaledIdentity(BaseLinearOperatorIdentity): return self._num_rows_cast_to_real_dtype * math_ops.log( self._abs_multiplier) - def _solve(self, rhs, adjoint=False): + def _solve(self, rhs, adjoint=False, adjoint_arg=False): + rhs = linear_operator_util.matrix_adjoint(rhs) if adjoint_arg else rhs if adjoint: matrix = self._multiplier_matrix_conj else: diff --git a/tensorflow/contrib/linalg/python/ops/linear_operator_test_util.py b/tensorflow/contrib/linalg/python/ops/linear_operator_test_util.py index 0b7fc3da396..c8bc62eeef9 100644 --- a/tensorflow/contrib/linalg/python/ops/linear_operator_test_util.py +++ b/tensorflow/contrib/linalg/python/ops/linear_operator_test_util.py @@ -23,6 +23,7 @@ import numpy as np import six from tensorflow.contrib.framework.python.framework import tensor_util as contrib_tensor_util +from tensorflow.contrib.linalg.python.ops import linear_operator_util from tensorflow.python.framework import dtypes from tensorflow.python.framework import ops from tensorflow.python.framework 
import random_seed @@ -213,18 +214,26 @@ class LinearOperatorDerivedClassTest(test.TestCase): for shape in self._shapes_to_test: for dtype in self._dtypes_to_test: for adjoint in False, True: - with self.test_session(graph=ops.Graph()) as sess: - sess.graph.seed = random_seed.DEFAULT_GRAPH_SEED - operator, mat, feed_dict = self._operator_and_mat_and_feed_dict( - shape, dtype, use_placeholder=use_placeholder) - x = self._make_x(operator, adjoint=adjoint) - op_apply = operator.apply(x, adjoint=adjoint) - mat_apply = math_ops.matmul(mat, x, adjoint_a=adjoint) - if not use_placeholder: - self.assertAllEqual(op_apply.get_shape(), mat_apply.get_shape()) - op_apply_v, mat_apply_v = sess.run([op_apply, mat_apply], - feed_dict=feed_dict) - self.assertAC(op_apply_v, mat_apply_v) + for adjoint_arg in False, True: + with self.test_session(graph=ops.Graph()) as sess: + sess.graph.seed = random_seed.DEFAULT_GRAPH_SEED + operator, mat, feed_dict = self._operator_and_mat_and_feed_dict( + shape, dtype, use_placeholder=use_placeholder) + x = self._make_x(operator, adjoint=adjoint) + # If adjoint_arg, compute A X^H^H = A X. 
+ if adjoint_arg: + op_apply = operator.apply( + linear_operator_util.matrix_adjoint(x), + adjoint=adjoint, adjoint_arg=adjoint_arg) + else: + op_apply = operator.apply(x, adjoint=adjoint) + mat_apply = math_ops.matmul(mat, x, adjoint_a=adjoint) + if not use_placeholder: + self.assertAllEqual( + op_apply.get_shape(), mat_apply.get_shape()) + op_apply_v, mat_apply_v = sess.run([op_apply, mat_apply], + feed_dict=feed_dict) + self.assertAC(op_apply_v, mat_apply_v) def test_solve(self): self._skip_if_tests_to_skip_contains("solve") @@ -232,18 +241,27 @@ class LinearOperatorDerivedClassTest(test.TestCase): for shape in self._shapes_to_test: for dtype in self._dtypes_to_test: for adjoint in False, True: - with self.test_session(graph=ops.Graph()) as sess: - sess.graph.seed = random_seed.DEFAULT_GRAPH_SEED - operator, mat, feed_dict = self._operator_and_mat_and_feed_dict( - shape, dtype, use_placeholder=use_placeholder) - rhs = self._make_rhs(operator, adjoint=adjoint) - op_solve = operator.solve(rhs, adjoint=adjoint) - mat_solve = linalg_ops.matrix_solve(mat, rhs, adjoint=adjoint) - if not use_placeholder: - self.assertAllEqual(op_solve.get_shape(), mat_solve.get_shape()) - op_solve_v, mat_solve_v = sess.run([op_solve, mat_solve], - feed_dict=feed_dict) - self.assertAC(op_solve_v, mat_solve_v) + for adjoint_arg in False, True: + with self.test_session(graph=ops.Graph()) as sess: + sess.graph.seed = random_seed.DEFAULT_GRAPH_SEED + operator, mat, feed_dict = self._operator_and_mat_and_feed_dict( + shape, dtype, use_placeholder=use_placeholder) + rhs = self._make_rhs(operator, adjoint=adjoint) + # If adjoint_arg, solve A X = (rhs^H)^H = rhs. 
+ if adjoint_arg: + op_solve = operator.solve( + linear_operator_util.matrix_adjoint(rhs), + adjoint=adjoint, adjoint_arg=adjoint_arg) + else: + op_solve = operator.solve( + rhs, adjoint=adjoint, adjoint_arg=adjoint_arg) + mat_solve = linalg_ops.matrix_solve(mat, rhs, adjoint=adjoint) + if not use_placeholder: + self.assertAllEqual( + op_solve.get_shape(), mat_solve.get_shape()) + op_solve_v, mat_solve_v = sess.run([op_solve, mat_solve], + feed_dict=feed_dict) + self.assertAC(op_solve_v, mat_solve_v) def test_add_to_tensor(self): self._skip_if_tests_to_skip_contains("add_to_tensor") diff --git a/tensorflow/contrib/linalg/python/ops/linear_operator_tril.py b/tensorflow/contrib/linalg/python/ops/linear_operator_tril.py index 38461ce8a22..756e26cc130 100644 --- a/tensorflow/contrib/linalg/python/ops/linear_operator_tril.py +++ b/tensorflow/contrib/linalg/python/ops/linear_operator_tril.py @@ -173,8 +173,9 @@ class LinearOperatorTriL(linear_operator.LinearOperator): self._diag, message="Singular operator: Diagonal contained zero values.") - def _apply(self, x, adjoint=False): - return math_ops.matmul(self._tril, x, adjoint_a=adjoint) + def _apply(self, x, adjoint=False, adjoint_arg=False): + return math_ops.matmul( + self._tril, x, adjoint_a=adjoint, adjoint_b=adjoint_arg) def _determinant(self): return math_ops.reduce_prod(self._diag, reduction_indices=[-1]) @@ -183,7 +184,8 @@ class LinearOperatorTriL(linear_operator.LinearOperator): return math_ops.reduce_sum( math_ops.log(math_ops.abs(self._diag)), reduction_indices=[-1]) - def _solve(self, rhs, adjoint=False): + def _solve(self, rhs, adjoint=False, adjoint_arg=False): + rhs = linear_operator_util.matrix_adjoint(rhs) if adjoint_arg else rhs return linalg_ops.matrix_triangular_solve( self._tril, rhs, lower=True, adjoint=adjoint) diff --git a/tensorflow/contrib/linalg/python/ops/linear_operator_udvh_update.py b/tensorflow/contrib/linalg/python/ops/linear_operator_udvh_update.py index 89b5c1ab1b9..4ca77ab1471 100644 
--- a/tensorflow/contrib/linalg/python/ops/linear_operator_udvh_update.py +++ b/tensorflow/contrib/linalg/python/ops/linear_operator_udvh_update.py @@ -348,21 +348,21 @@ class LinearOperatorUDVHUpdate(linear_operator.LinearOperator): return array_ops.concat( [batch_shape, self.base_operator.shape_tensor()[-2:]], axis=0) - def _apply(self, x, adjoint=False): + def _apply(self, x, adjoint=False, adjoint_arg=False): u = self.u v = self.v l = self.base_operator d = self.diag_operator - leading_term = l.apply(x, adjoint=adjoint) + leading_term = l.apply(x, adjoint=adjoint, adjoint_arg=adjoint_arg) if adjoint: - uh_x = math_ops.matmul(u, x, adjoint_a=True) + uh_x = math_ops.matmul(u, x, adjoint_a=True, adjoint_b=adjoint_arg) d_uh_x = d.apply(uh_x, adjoint=adjoint) v_d_uh_x = math_ops.matmul(v, d_uh_x) return leading_term + v_d_uh_x else: - vh_x = math_ops.matmul(v, x, adjoint_a=True) + vh_x = math_ops.matmul(v, x, adjoint_a=True, adjoint_b=adjoint_arg) d_vh_x = d.apply(vh_x, adjoint=adjoint) u_d_vh_x = math_ops.matmul(u, d_vh_x) return leading_term + u_d_vh_x @@ -398,7 +398,7 @@ class LinearOperatorUDVHUpdate(linear_operator.LinearOperator): return log_abs_det_c + log_abs_det_d + log_abs_det_l - def _solve(self, rhs, adjoint=False): + def _solve(self, rhs, adjoint=False, adjoint_arg=False): if self.base_operator.is_non_singular is False: raise ValueError( "Solve not implemented unless this is a perturbation of a " @@ -421,7 +421,7 @@ class LinearOperatorUDVHUpdate(linear_operator.LinearOperator): u = self.u # L^{-1} rhs - linv_rhs = l.solve(rhs, adjoint=adjoint) + linv_rhs = l.solve(rhs, adjoint=adjoint, adjoint_arg=adjoint_arg) # V^H L^{-1} rhs vh_linv_rhs = math_ops.matmul(v, linv_rhs, adjoint_a=True) # C^{-1} V^H L^{-1} rhs diff --git a/tensorflow/contrib/training/BUILD b/tensorflow/contrib/training/BUILD index a781f0cbfc8..a8d8bda060d 100644 --- a/tensorflow/contrib/training/BUILD +++ b/tensorflow/contrib/training/BUILD @@ -24,6 +24,7 @@ py_library( 
"python/training/failure_tolerator.py", "python/training/feeder.py", "python/training/hparam.py", + "python/training/python_input.py", "python/training/resample.py", "python/training/sampling_ops.py", "python/training/sequence_queueing_state_saver.py", @@ -46,8 +47,10 @@ py_library( "//tensorflow/python:logging_ops", "//tensorflow/python:math_ops", "//tensorflow/python:ops", + "//tensorflow/python:parsing_ops", "//tensorflow/python:platform", "//tensorflow/python:random_ops", + "//tensorflow/python:script_ops", "//tensorflow/python:state_ops", "//tensorflow/python:string_ops", "//tensorflow/python:summary", @@ -243,6 +246,26 @@ py_test( ], ) +py_test( + name = "python_input_test", + size = "medium", + srcs = ["python/training/python_input_test.py"], + srcs_version = "PY2AND3", + tags = ["manual"], + deps = [ + ":training_py", + "//tensorflow/python:array_ops", + "//tensorflow/python:client_testlib", + "//tensorflow/python:data_flow_ops", + "//tensorflow/python:framework_for_generated_wrappers", + "//tensorflow/python:framework_test_lib", + "//tensorflow/python:math_ops", + "//tensorflow/python:parsing_ops", + "//tensorflow/python:training", + "//third_party/py/numpy", + ], +) + py_test( name = "evaluation_test", size = "small", diff --git a/tensorflow/contrib/training/__init__.py b/tensorflow/contrib/training/__init__.py index be097fd9fca..b8d4629ac47 100644 --- a/tensorflow/contrib/training/__init__.py +++ b/tensorflow/contrib/training/__init__.py @@ -35,6 +35,7 @@ See @{$python/contrib.training} guide. 
@@HParams @@HParamDef @@parse_values +@@python_input """ from __future__ import absolute_import @@ -54,6 +55,7 @@ from tensorflow.contrib.training.python.training.evaluation import wait_for_new_ from tensorflow.contrib.training.python.training.failure_tolerator import * from tensorflow.contrib.training.python.training.feeder import * from tensorflow.contrib.training.python.training.hparam import * +from tensorflow.contrib.training.python.training.python_input import python_input from tensorflow.contrib.training.python.training.resample import * from tensorflow.contrib.training.python.training.sampling_ops import * from tensorflow.contrib.training.python.training.sequence_queueing_state_saver import * diff --git a/tensorflow/contrib/training/python/training/bucket_ops.py b/tensorflow/contrib/training/python/training/bucket_ops.py index 7c50f43b792..7e293da5511 100644 --- a/tensorflow/contrib/training/python/training/bucket_ops.py +++ b/tensorflow/contrib/training/python/training/bucket_ops.py @@ -251,10 +251,16 @@ def bucket(tensors, else: which_dequeue = lambda q: q.dequeue_many + def make_list(t): + if isinstance(t, (list, tuple)): + return t + else: + return [t] + enqueues_to_top = [ top_queue.enqueue( - [constant_op.constant(i)] + which_dequeue(q)( - bs, name="read_bucket_%d" % i), + [constant_op.constant(i)] + make_list(which_dequeue(q)( + bs, name="read_bucket_%d" % i)), name="enqueue_from_bucket_%d" % i) for i, (q, bs) in enumerate(zip(bucket_queues, batch_size)) ] @@ -282,6 +288,8 @@ def bucket(tensors, dequeued = top_queue.dequeue(name="dequeue_top") which_bucket_dequeued = dequeued[0] dequeued = dequeued[1:] + if len(dequeued) == 1: + dequeued = dequeued[0] dequeued = _restore_sparse_tensors(dequeued, sparse_info) return (which_bucket_dequeued, _as_original_type(tensors, dequeued)) diff --git a/tensorflow/contrib/training/python/training/python_input.py b/tensorflow/contrib/training/python/training/python_input.py new file mode 100644 index 
00000000000..7f5420a98a1 --- /dev/null +++ b/tensorflow/contrib/training/python/training/python_input.py @@ -0,0 +1,178 @@ +# Copyright 2017 The TensorFlow Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# ============================================================================== +"""Operations for asynchronously reading data from python into queues. +""" +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import threading + +import numpy as np + +from tensorflow.python.framework import ops +from tensorflow.python.framework import tensor_shape +from tensorflow.python.ops import parsing_ops +from tensorflow.python.ops import script_ops + + +def _process_yielded_dict(feature_values, keys, features, dtypes, shapes): + """Read feature_values from the generator and emit a proper output dict.""" + if not isinstance(feature_values, dict): + raise TypeError("generator must return dict, saw: %s" % feature_values) + + processed_values = {} + for pk in keys: + if feature_values.get(pk, None) is not None: + processed_values[pk] = np.asarray( + feature_values[pk], dtype=dtypes[pk].as_numpy_dtype) + check_shape = tensor_shape.TensorShape(processed_values[pk].shape) + if not shapes[pk].is_compatible_with(check_shape): + raise ValueError( + "Feature '%s' has shape %s that is incompatible with declared " + "shape: %s" % (pk, check_shape, shapes[pk])) + continue + if
isinstance(features[pk], parsing_ops.FixedLenFeature): + if features[pk].default_value is not None: + processed_values[pk] = np.asarray( + features[pk].default_value, dtype=dtypes[pk].as_numpy_dtype) + elif isinstance(features[pk], parsing_ops.FixedLenSequenceFeature): + processed_values[pk] = np.empty( + [0] + list(features[pk].shape), dtype=dtypes[pk].as_numpy_dtype) + else: + raise ValueError( + "Expected generator to return key '%s' with non-empty value" % pk) + + return processed_values + + +def python_input(generator, features, name=None): + """Easily feed data from a python generator into TensorFlow queues. + + Example usage: + + ```python + def generator(): + for i in range(3): + yield {"value": i} + + features = { + "value": tf.FixedLenFeature(shape=[], dtype=dtypes.int32) + } + + tensor_dict = tf.contrib.training.python_input(generator, features) + batched_dict = tf.train.batch( + tensor_dict, batch_size=2, allow_smaller_final_batch=True) + + s = tf.Session() + tf.train.start_queue_runners() + + batch1 = s.run(batched_dict) # returns {"value": np.array([0, 1])} + batch2 = s.run(batched_dict) # returns {"value": np.array([2])} + s.run(batched_dict) # error: Queue is closed (generator finished at i==3) + ``` + + Args: + generator: A python generator that takes no arguments, and yields dicts + containing a single minibatch entry one at a time. + features: A python `dict` mapping keys expected from the generator to + instances of `tf.FixedLenFeature`, or `tf.FixedLenSequenceFeature`. + name: (Optional) A name for the operations. + + Returns: + A dict mapping keys of the `features` dict to `Tensor` objects. + These `Tensor` objects are outputs of a queue that is fed by `generator`. + + Raises: + TypeError: If generator is not callable or features is not a dict. + TypeError: If any of features' values are not a Feature object.
+ NotImplementedError: If any of features' values are instances of + `SparseFeature` or `VarLenFeature` (these are not currently supported). + ValueError: If any FixedLenSequenceFeatures contain a default value + (this field is not supported). + ValueError: if any FixedLenSequenceFeatures have allow_missing=False + (this field is not supported). + """ + if not callable(generator): + raise TypeError("generator must be callable, saw: %s" % generator) + if not isinstance(features, dict): + raise TypeError("features must be a dict, saw: %s" + % type(features).__name__) + + with ops.name_scope(name, "python_input"): + shapes = {} + dtypes = {} + for k, v in features.items(): + if isinstance(v, parsing_ops.FixedLenFeature): + if v.default_value is not None: + value = ops.convert_to_tensor(v.default_value, dtype=v.dtype, name=k) + shapes[k] = value.shape + dtypes[k] = value.dtype + else: + tensor_shape.TensorShape(v.shape).assert_is_fully_defined() + shapes[k] = tensor_shape.TensorShape(v.shape) + dtypes[k] = v.dtype + elif isinstance(v, parsing_ops.VarLenFeature): + raise NotImplementedError("VarLenFeature not supported") + elif isinstance(v, parsing_ops.SparseFeature): + raise NotImplementedError("SparseFeature not supported") + elif isinstance(v, parsing_ops.FixedLenSequenceFeature): + if v.default_value is not None: + raise ValueError("FixedLenSequenceFeature with default value not " + "supported") + if not v.allow_missing: + raise ValueError("FixedLenSequenceFeature with allow_missing=False " + "not supported") + tensor_shape.TensorShape(v.shape).assert_is_fully_defined() + shapes[k] = tensor_shape.TensorShape([None]).concatenate(v.shape) + dtypes[k] = v.dtype + else: + raise TypeError( + "Expected value for features key '%s' to be one of " + "FixedLenFeature, VarLenFeature, SparseFeature, or " + "FixedLenSequenceFeature. 
Got: %s" % (k, v)) + + keys = list(shapes.keys()) + dtypes_list = [dtypes[pk] for pk in keys] + + counter = [0] + lock = threading.Lock() + iterator = iter(generator()) + + def generator_iter(): + """Iterate through generator output and return np.arrays to py_func.""" + with lock: + try: + feature_values = next(iterator) + counter[0] += 1 + except StopIteration as e: + raise StopIteration("Iteration finished. Processed %d entries (%s)" + % (counter[0], e)) + + processed_dict = _process_yielded_dict( + feature_values, keys, features, dtypes, shapes) + return [processed_dict[pk] for pk in keys] + + generator_pyfunc_values = script_ops.py_func( + generator_iter, inp=[], Tout=dtypes_list, stateful=True) + + pyfunc_input = {k: v for (k, v) in zip(keys, generator_pyfunc_values)} + for k, v in shapes.items(): + pyfunc_input[k].set_shape(v) + + return pyfunc_input + + +__all__ = ["python_input"] diff --git a/tensorflow/contrib/training/python/training/python_input_test.py b/tensorflow/contrib/training/python/training/python_input_test.py new file mode 100644 index 00000000000..afd0f38c2cd --- /dev/null +++ b/tensorflow/contrib/training/python/training/python_input_test.py @@ -0,0 +1,191 @@ +# Copyright 2016 The TensorFlow Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# ============================================================================== +"""Tests for tf.contrib.training.python_input.""" +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import numpy as np +from tensorflow.contrib.training.python.training import bucket_ops +from tensorflow.contrib.training.python.training import python_input +from tensorflow.python.framework import dtypes +from tensorflow.python.framework import errors +from tensorflow.python.ops import parsing_ops +from tensorflow.python.platform import test +from tensorflow.python.training import coordinator +from tensorflow.python.training import input as core_input +from tensorflow.python.training import queue_runner_impl + + +class PythonInputTest(test.TestCase): + + def testGenerator(self): + def simple_generator(): + for i in range(2): + yield {"value": i, "ignored": 3} + + simple_features = { + "value": parsing_ops.FixedLenFeature(shape=[], dtype=dtypes.int32) + } + tensors = python_input.python_input(simple_generator, simple_features) + self.assertEqual(["value"], tensors.keys()) + self.assertEqual(dtypes.int32, tensors["value"].dtype) + self.assertEqual((), tensors["value"].shape) + + with self.test_session() as sess: + self.assertEqual({"value": 0}, sess.run(tensors)) + self.assertEqual({"value": 1}, sess.run(tensors)) + with self.assertRaisesOpError("Iteration finished"): + sess.run(tensors) + + def testInvalidGenerator(self): + generator1 = lambda: iter([{"value": "a"}]) + int_features = { + "value": parsing_ops.FixedLenFeature(shape=[], dtype=dtypes.int32) + } + tensors1 = python_input.python_input(generator1, int_features) + + with self.test_session() as sess: + with self.assertRaisesOpError("invalid literal"): + # Can't convert a string to an integer + sess.run(tensors1) + + generator2 = lambda: iter([None]) + tensors2 = python_input.python_input(generator2, int_features) + + with self.test_session() as sess: + with 
self.assertRaisesOpError("generator must return dict"): + sess.run(tensors2) + + generator3 = lambda: iter([{"value": [1, 2]}]) + tensors3 = python_input.python_input(generator3, int_features) + + with self.test_session() as sess: + with self.assertRaisesOpError("incompatible with declared shape"): + sess.run(tensors3) + + def testGeneratorWorksWithBatching(self): + def simple_generator(): + for i in range(5): + yield {"value": i, "ignored": 3} + + simple_features = { + "value": parsing_ops.FixedLenFeature(shape=[], dtype=dtypes.int32) + } + tensors = python_input.python_input(simple_generator, simple_features) + + # Request batches of size 4 at a time, the final batch may be smaller. + batched_tensors = core_input.batch(tensors, batch_size=4, + allow_smaller_final_batch=True) + + self.assertEqual(["value"], batched_tensors.keys()) + self.assertEqual(dtypes.int32, batched_tensors["value"].dtype) + self.assertEqual([None], batched_tensors["value"].shape.as_list()) + + with self.test_session() as sess: + # The generator emits 5 items total. The first 4 are returned in + # the first session run; the final one is returned in the + # second. This works because allow_smaller_final_batch=True. + coord = coordinator.Coordinator() + threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord) + r1 = sess.run(batched_tensors) + r2 = sess.run(batched_tensors) + self.assertAllEqual([0, 1, 2, 3], r1["value"]) + self.assertEqual([4], r2["value"]) + with self.assertRaisesOpError("Iteration finished"): + sess.run(tensors) + coord.request_stop() + for thread in threads: + thread.join() + + def testGeneratorWorksWithManyBatchingThreads(self): + def simple_generator(): + for i in range(5000): + yield {"value": i, "ignored": 3} + + simple_features = { + "value": parsing_ops.FixedLenFeature(shape=[], dtype=dtypes.int32) + } + tensors = python_input.python_input(simple_generator, simple_features) + + # Request batches of size 20 at a time, the final batch may be smaller. 
+ _, batched_tensors = bucket_ops.bucket( + tensors, which_bucket=tensors["value"] % 5, + batch_size=20, num_buckets=5, num_threads=7, capacity=17, + allow_smaller_final_batch=True) + + self.assertEqual(["value"], batched_tensors.keys()) + self.assertEqual(dtypes.int32, batched_tensors["value"].dtype) + self.assertEqual([None], batched_tensors["value"].shape.as_list()) + + with self.test_session() as sess: + # The generator emits 5000 items total. Batches are drained until the + # bucket queues are closed and OutOfRangeError is raised; the final + # batches may be smaller because allow_smaller_final_batch=True. + coord = coordinator.Coordinator() + threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord) + results = [] + while True: + try: + r = sess.run(batched_tensors) + results.extend(r["value"].tolist()) + except errors.OutOfRangeError: + break + coord.request_stop() + for thread in threads: + thread.join() + self.assertEqual(sorted(results), + list(range(5000))) + + def testVaryingFieldsInGenerator(self): + def simple_generator(): + for i in range(2): + yield {"value": i, + "seqlen_value": np.ones((i, 1))} + + simple_features = { + "value": parsing_ops.FixedLenFeature(shape=[], dtype=dtypes.int32), + "seqlen_value": parsing_ops.FixedLenSequenceFeature( + shape=[1], dtype=dtypes.float32, allow_missing=True), + "empty_value": parsing_ops.FixedLenFeature( + default_value=[-1, -2], dtype=dtypes.int32, shape=[2]) + } + tensors = python_input.python_input(simple_generator, simple_features) + self.assertEqual( + set(["value", "seqlen_value", "empty_value"]), set(tensors.keys())) + self.assertEqual(dtypes.int32, tensors["value"].dtype) + self.assertEqual((), tensors["value"].shape) + self.assertEqual(dtypes.float32, tensors["seqlen_value"].dtype) + self.assertEqual([None, 1], tensors["seqlen_value"].shape.as_list()) + self.assertEqual(dtypes.int32, tensors["empty_value"].dtype) + self.assertEqual([2], tensors["empty_value"].shape) + + with self.test_session() as sess: +
r1 = sess.run(tensors) + self.assertAllEqual(0, r1["value"]) + self.assertAllEqual(np.ones((0, 1)), r1["seqlen_value"]) + self.assertAllEqual([-1, -2], r1["empty_value"]) + + r2 = sess.run(tensors) + self.assertAllEqual(1, r2["value"]) + self.assertAllEqual([[1]], r2["seqlen_value"]) + self.assertAllEqual([-1, -2], r2["empty_value"]) + + with self.assertRaisesOpError("Iteration finished"): + sess.run(tensors) + + +if __name__ == "__main__": + test.main() diff --git a/tensorflow/core/grappler/BUILD b/tensorflow/core/grappler/BUILD index 5d74d3d3b17..476a9ac52a4 100644 --- a/tensorflow/core/grappler/BUILD +++ b/tensorflow/core/grappler/BUILD @@ -19,6 +19,7 @@ filegroup( srcs = [ "devices.cc", "devices.h", + "grappler_item.cc", "grappler_item.h", "utils.cc", "utils.h", diff --git a/tensorflow/core/grappler/optimizers/BUILD b/tensorflow/core/grappler/optimizers/BUILD index 2ea150ce188..64d5815bf78 100644 --- a/tensorflow/core/grappler/optimizers/BUILD +++ b/tensorflow/core/grappler/optimizers/BUILD @@ -17,6 +17,7 @@ filegroup( srcs = glob( [ "*_optimizer.*", + "auto_parallel.*", "constant_folding.*", "model_pruner.*", "graph_rewriter.*", @@ -210,6 +211,7 @@ cc_library( ], visibility = ["//visibility:public"], deps = [ + ":auto_parallel", ":constant_folding", ":graph_optimizer", ":layout_optimizer", diff --git a/tensorflow/core/grappler/optimizers/auto_parallel.h b/tensorflow/core/grappler/optimizers/auto_parallel.h index cac0db2c236..ad90bbe0289 100644 --- a/tensorflow/core/grappler/optimizers/auto_parallel.h +++ b/tensorflow/core/grappler/optimizers/auto_parallel.h @@ -25,7 +25,9 @@ namespace grappler { // Automatically parallelize a graph by splitting in the batch dimension. 
class AutoParallel : public GraphOptimizer { public: - AutoParallel(int num_replicas) : num_replicas_(num_replicas) {} + AutoParallel(int num_replicas) : num_replicas_(num_replicas) { + CHECK(num_replicas_ >= 2); + } ~AutoParallel() override {} string name() const override { return "autoparallel"; }; diff --git a/tensorflow/core/grappler/optimizers/meta_optimizer.cc b/tensorflow/core/grappler/optimizers/meta_optimizer.cc index 0fe9359b753..2ea5adffebc 100644 --- a/tensorflow/core/grappler/optimizers/meta_optimizer.cc +++ b/tensorflow/core/grappler/optimizers/meta_optimizer.cc @@ -15,6 +15,7 @@ limitations under the License. #include "tensorflow/core/grappler/optimizers/meta_optimizer.h" #include "tensorflow/core/framework/versions.pb.h" +#include "tensorflow/core/grappler/optimizers/auto_parallel.h" #include "tensorflow/core/grappler/optimizers/constant_folding.h" #include "tensorflow/core/grappler/optimizers/graph_optimizer.h" #include "tensorflow/core/grappler/optimizers/layout_optimizer.h" @@ -41,6 +42,10 @@ std::unique_ptr<GraphOptimizer> MetaOptimizer::NewOptimizer( if (optimizer == "memory") { graph_optimizer.reset(new MemoryOptimizer()); } + if (optimizer == "autoparallel") { + graph_optimizer.reset( + new AutoParallel(cfg_.auto_parallel().num_replicas())); + } return graph_optimizer; } @@ -63,11 +68,15 @@ Status MetaOptimizer::Optimize(Cluster* cluster, const GrapplerItem& item, optimizers.push_back( std::unique_ptr<GraphOptimizer>(new MemoryOptimizer())); } + if (cfg_.auto_parallel().enable()) { + optimizers.push_back(std::unique_ptr<GraphOptimizer>( + new AutoParallel(cfg_.auto_parallel().num_replicas()))); + } } else { - std::set<string> avaliable_optimizers = {"pruning", "constfold", "layout", - "memory"}; + std::set<string> available_optimizers = {"pruning", "constfold", "layout", + "memory", "autoparallel"}; for (const auto& optimizer : cfg_.optimizers()) { - if (avaliable_optimizers.find(optimizer) != avaliable_optimizers.end()) { + if (available_optimizers.find(optimizer) != available_optimizers.end()) {
optimizers.push_back(NewOptimizer(optimizer)); } } @@ -102,7 +111,8 @@ void MetaOptimizer::Feedback(Cluster* cluster, const GrapplerItem& item, } bool MetaOptimizerEnabled(const RewriterConfig& cfg) { - return cfg.optimize_tensor_layout(); + return cfg.optimize_tensor_layout() || cfg.constant_folding() || + cfg.auto_parallel().enable() || !cfg.optimizers().empty(); } Status RunMetaOptimizer(const GrapplerItem& item, const RewriterConfig& cfg, diff --git a/tensorflow/core/kernels/aggregate_ops.cc b/tensorflow/core/kernels/aggregate_ops.cc index cbc0537b454..0aa65729de2 100644 --- a/tensorflow/core/kernels/aggregate_ops.cc +++ b/tensorflow/core/kernels/aggregate_ops.cc @@ -161,9 +161,11 @@ TF_CALL_NUMBER_TYPES(REGISTER_ADDN_CPU); #undef REGISTER_ADDN_CPU #if GOOGLE_CUDA -REGISTER_ADDN(Eigen::half, GPU); -REGISTER_ADDN(float, GPU); -REGISTER_ADDN(double, GPU); +#define REGISTER_ADDN_GPU(type) REGISTER_ADDN(type, GPU) +TF_CALL_GPU_NUMBER_TYPES(REGISTER_ADDN_GPU); +TF_CALL_complex64(REGISTER_ADDN_GPU); +TF_CALL_complex128(REGISTER_ADDN_GPU); +#undef REGISTER_ADDN_GPU // A special GPU kernel for int32. // TODO(b/25387198): Also enable int32 in device memory. 
This kernel diff --git a/tensorflow/core/kernels/aggregate_ops_gpu.cu.cc b/tensorflow/core/kernels/aggregate_ops_gpu.cu.cc index 51393787acb..3f449be7544 100644 --- a/tensorflow/core/kernels/aggregate_ops_gpu.cu.cc +++ b/tensorflow/core/kernels/aggregate_ops_gpu.cu.cc @@ -154,6 +154,8 @@ struct Add9Functor { template struct functor::Add9Functor; TF_CALL_GPU_NUMBER_TYPES(REGISTER_FUNCTORS); +TF_CALL_complex64(REGISTER_FUNCTORS); +TF_CALL_complex128(REGISTER_FUNCTORS); #undef REGISTER_FUNCTORS diff --git a/tensorflow/core/kernels/quantization_utils_test.cc b/tensorflow/core/kernels/quantization_utils_test.cc index 84566047405..c7dbc0e5d11 100644 --- a/tensorflow/core/kernels/quantization_utils_test.cc +++ b/tensorflow/core/kernels/quantization_utils_test.cc @@ -355,6 +355,24 @@ TEST_F(QuantizationUtilsTest, AvoidBias) { const int back_to_int = FloatToQuantized(as_float, 0.0f, 2.0f); EXPECT_EQ(i, back_to_int); } + + // All perfectly representable floats should survive quantization, even + // if we pick a range where min is not itself perfectly representable. + const float min = -0.1375f; + const float max = 1.1385f; + const float step_size = (max - min) / 255.0f; + const float tolerance = step_size / 1000.0f; + // This is the smallest perfectly representable float in the range. + float first_float = ceil(min / step_size) * step_size; + // TODO(ahentz): The current version always incurs a small error, which we + // need to account for. We should fix QuantizedToFloat<> to remove this bias.
+ const float expected_error = first_float - min; + ASSERT_GT(expected_error, tolerance); + for (float f = first_float; f <= max; f += step_size) { + const int as_int = FloatToQuantized(f, min, max); + const float back_to_float = QuantizedToFloat(as_int, min, max); + EXPECT_NEAR(f, back_to_float + expected_error, tolerance); + } } TEST_F(QuantizationUtilsTest, RequantizeInNewRange) { diff --git a/tensorflow/core/ops/array_ops.cc b/tensorflow/core/ops/array_ops.cc index e2e07a4bf19..8761c400b11 100644 --- a/tensorflow/core/ops/array_ops.cc +++ b/tensorflow/core/ops/array_ops.cc @@ -488,7 +488,7 @@ REGISTER_OP("SplitV") ShapeHandle output_shape; const Tensor* size_splits = c->input_tensor(1); if (rank == InferenceContext::kUnknownRank) { - // If the rank of input tensor is unknown, then return unkown shapes. + // If the rank of input tensor is unknown, then return unknown shapes. output_shape = c->UnknownShape(); for (int i = 0; i < num_outputs; ++i) { c->set_output(i, output_shape); @@ -497,7 +497,7 @@ REGISTER_OP("SplitV") // Throw error if input is a scalar. return errors::InvalidArgument("Can't split scalars"); } else if (size_splits == nullptr || !c->ValueKnown(split_dimension)) { - // If split dimension or tensor containing the split sizes is unkown, + // If split dimension or tensor containing the split sizes is unknown, // then return unknown shapes of same rank as input. output_shape = c->UnknownShapeOfRank(rank); for (int i = 0; i < num_outputs; ++i) { @@ -1328,8 +1328,8 @@ this operation will permute `params` accordingly. `validate_indices`: DEPRECATED. If this operation is assigned to CPU, values in `indices` are always validated to be within range. If assigned to GPU, -out-of-bound indices result in unspecified behavior (currently the result is -`0`, but this may become an error in the future). +out-of-bound indices result in safe but unspecified behavior, which may include +raising an error.
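The quantization test above exercises the affine mapping between floats and quantized integers. A minimal pure-Python sketch of that round trip may help; the helper names below are mine, not TensorFlow's, and the real `FloatToQuantized`/`QuantizedToFloat` templates also handle clamping and the small bias the TODO mentions:

```python
def float_to_quantized(f, min_v, max_v, bits=8):
    """Map f in [min_v, max_v] onto the integer grid [0, 2**bits - 1]."""
    levels = (1 << bits) - 1
    scale = levels / (max_v - min_v)
    return int(round((f - min_v) * scale))


def quantized_to_float(q, min_v, max_v, bits=8):
    """Inverse affine map: recover a float from its quantized level."""
    step = (max_v - min_v) / ((1 << bits) - 1)
    return min_v + q * step


# Round-trip error is bounded by half a quantization step.
step = (2.0 - 0.0) / 255.0
q = float_to_quantized(0.5, 0.0, 2.0)
assert abs(quantized_to_float(q, 0.0, 2.0) - 0.5) <= step / 2
```

The test's `first_float = ceil(min / step_size) * step_size` picks the smallest grid-aligned value so that only the systematic bias, not rounding, contributes to the observed error.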
diff --git a/tensorflow/core/protobuf/rewriter_config.proto b/tensorflow/core/protobuf/rewriter_config.proto index 63821cb55ef..753edba4b84 100644 --- a/tensorflow/core/protobuf/rewriter_config.proto +++ b/tensorflow/core/protobuf/rewriter_config.proto @@ -6,6 +6,11 @@ option java_outer_classname = "RewriterConfigProtos"; option java_multiple_files = true; option java_package = "org.tensorflow.framework"; +message AutoParallelOptions { + bool enable = 1; + int32 num_replicas = 2; +} + message RewriterConfig { bool optimize_tensor_layout = 1; bool disable_model_pruning = 2; @@ -19,6 +24,8 @@ message RewriterConfig { } MemOptType memory_optimization = 4; + AutoParallelOptions auto_parallel = 5; + // If non-empty, will use this as an alternative way to specify a list of // optimizations to turn on and the order of the optimizations. repeated string optimizers = 100; diff --git a/tensorflow/docs_src/tutorials/using_gpu.md b/tensorflow/docs_src/tutorials/using_gpu.md index d64cdafdefb..dcec62d2749 100644 --- a/tensorflow/docs_src/tutorials/using_gpu.md +++ b/tensorflow/docs_src/tutorials/using_gpu.md @@ -57,14 +57,17 @@ have the same device assignment. with tf.device('/cpu:0'): a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a') b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b') - c = tf.matmul(a, b) +c = tf.matmul(a, b) # Creates a session with log_device_placement set to True. sess = tf.Session(config=tf.ConfigProto(log_device_placement=True)) # Runs the op. print(sess.run(c)) ``` -You will see that now `a` and `b` are assigned to `cpu:0`. +You will see that now `a` and `b` are assigned to `cpu:0`. Since a device was +not explicitly specified for the `MatMul` operation, the TensorFlow runtime will +choose one based on the operation and available devices (`gpu:0` in this +example) and automatically copy tensors between devices if required. 
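For reference, the new `AutoParallelOptions` message added to `rewriter_config.proto` above could be populated from Python roughly as follows. This is a hypothetical sketch: the generated-proto module path and how the config is threaded into a session are assumptions, not something this patch shows.

```python
# Hypothetical sketch: populating the AutoParallelOptions added to
# rewriter_config.proto. The import path is an assumption based on where
# generated protos usually live.
from tensorflow.core.protobuf import rewriter_config_pb2

rewriter_config = rewriter_config_pb2.RewriterConfig()
rewriter_config.auto_parallel.enable = True
# AutoParallel's constructor CHECKs num_replicas >= 2 (see auto_parallel.h).
rewriter_config.auto_parallel.num_replicas = 2
```

With `auto_parallel.enable` set, `MetaOptimizerEnabled()` now returns true even when no other rewrites are configured, which is exactly what the `meta_optimizer.cc` change above arranges.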
``` Device mapping: diff --git a/tensorflow/python/__init__.py b/tensorflow/python/__init__.py index 3663d8af7ae..5f598364273 100644 --- a/tensorflow/python/__init__.py +++ b/tensorflow/python/__init__.py @@ -132,6 +132,7 @@ from tensorflow.python.ops import tensor_array_ops # documentation, or remove. _allowed_symbols = [ 'AttrValue', + 'AutoParallelOptions', 'ConfigProto', 'DeviceSpec', 'Event', diff --git a/tensorflow/python/debug/cli/curses_ui.py b/tensorflow/python/debug/cli/curses_ui.py index b7549b406b6..d665627a93a 100644 --- a/tensorflow/python/debug/cli/curses_ui.py +++ b/tensorflow/python/debug/cli/curses_ui.py @@ -124,7 +124,7 @@ class ScrollBar(object): raise ValueError("Insufficient height for ScrollBar (%d)" % (self._max_y - self._min_y + 1)) - def _block_y(self): + def _block_y(self, screen_coord_sys=False): """Get the 0-based y coordinate of the scroll block. This y coordinate takes into account the presence of the UP and DN buttons @@ -132,9 +132,13 @@ class ScrollBar(object): location, the return value will be 1; at the bottom location, the return value will be self._scroll_bar_height - 2. + Args: + screen_coord_sys: (`bool`) whether the return value will be in the + screen coordinate system. + Returns: (int) 0-based y coordinate of the scroll block, in the ScrollBar - coordinate system, i.e., not the screen coordinate system. For example, + coordinate system by default. For example, when scroll position is at the top, this return value will be 1 (not 0, because of the presence of the UP button). When scroll position is at the bottom, this return value will be self._scroll_bar_height - 2 @@ -142,8 +146,10 @@ class ScrollBar(object): button). 
""" - return int(float(self._scroll_position) / (self._output_num_rows - 1) * - (self._scroll_bar_height - 3)) + 1 + rel_block_y = int( + float(self._scroll_position) / (self._output_num_rows - 1) * + (self._scroll_bar_height - 3)) + 1 + return rel_block_y + self._min_y if screen_coord_sys else rel_block_y def layout(self): """Get the RichTextLines layout of the scroll bar. @@ -192,9 +198,11 @@ class ScrollBar(object): return _SCROLL_UP_A_LINE elif mouse_y == self._max_y: return _SCROLL_DOWN_A_LINE - elif mouse_y > self._block_y() and mouse_y < self._max_y: + elif (mouse_y > self._block_y(screen_coord_sys=True) and + mouse_y < self._max_y): return _SCROLL_DOWN - elif mouse_y < self._block_y() and mouse_y > self._min_y: + elif (mouse_y < self._block_y(screen_coord_sys=True) and + mouse_y > self._min_y): return _SCROLL_UP else: return None @@ -505,7 +513,7 @@ class CursesUI(base_ui.BaseUI): def get_help(self): return self._command_handler_registry.get_help() - def _screen_create_command_textbox(self, existing_command): + def _screen_create_command_textbox(self, existing_command=None): """Create command textbox on screen. Args: @@ -839,6 +847,7 @@ class CursesUI(base_ui.BaseUI): else: command = self._fetch_hyperlink_command(mouse_x, mouse_y) if command: + self._screen_create_command_textbox() exit_token = self._dispatch_command(command) if exit_token is not None: raise debugger_cli_common.CommandLineExit(exit_token=exit_token) @@ -898,13 +907,14 @@ class CursesUI(base_ui.BaseUI): """Automatically key in a command to the command Textbox. Args: - command: The command, as a string. + command: The command, as a string or None. erase_existing: (bool) whether existing text (if any) is to be erased first. 
""" if erase_existing: self._erase_existing_command() + command = command or "" for c in command: self._command_textbox.do_command(ord(c)) @@ -1227,9 +1237,9 @@ class CursesUI(base_ui.BaseUI): self._scroll_bar = ScrollBar( self._max_x - 2, - 2, + 3, self._max_x - 1, - self._output_num_rows, + self._output_num_rows + 1, self._output_pad_row, self._output_pad_height - self._output_pad_screen_height) diff --git a/tensorflow/python/debug/cli/curses_ui_test.py b/tensorflow/python/debug/cli/curses_ui_test.py index 8219f47ef3a..15e1356d292 100644 --- a/tensorflow/python/debug/cli/curses_ui_test.py +++ b/tensorflow/python/debug/cli/curses_ui_test.py @@ -113,7 +113,7 @@ class MockCursesUI(curses_ui.CursesUI): def _screen_create_command_window(self): pass - def _screen_create_command_textbox(self, existing_command): + def _screen_create_command_textbox(self, existing_command=None): """Override to insert observer of existing commands. Used in testing of history navigation and tab completion. @@ -1646,6 +1646,25 @@ class ScrollBarTest(test_util.TensorFlowTestCase): scroll_bar.get_click_command(7)) self.assertIsNone(scroll_bar.get_click_command(8)) + def testClickCommandsAreCorrectForScrollBarNotAtZeroMinY(self): + scroll_bar = curses_ui.ScrollBar(0, 5, 1, 12, 10, 20) + self.assertIsNone(scroll_bar.get_click_command(0)) + self.assertIsNone(scroll_bar.get_click_command(4)) + self.assertEqual(curses_ui._SCROLL_UP_A_LINE, + scroll_bar.get_click_command(5)) + self.assertEqual(curses_ui._SCROLL_UP, + scroll_bar.get_click_command(6)) + self.assertEqual(curses_ui._SCROLL_UP, + scroll_bar.get_click_command(7)) + self.assertIsNone(scroll_bar.get_click_command(8)) + self.assertEqual(curses_ui._SCROLL_DOWN, + scroll_bar.get_click_command(10)) + self.assertEqual(curses_ui._SCROLL_DOWN, + scroll_bar.get_click_command(11)) + self.assertEqual(curses_ui._SCROLL_DOWN_A_LINE, + scroll_bar.get_click_command(12)) + self.assertIsNone(scroll_bar.get_click_command(13)) + if __name__ == "__main__": 
googletest.main() diff --git a/tensorflow/python/estimator/model_fn.py b/tensorflow/python/estimator/model_fn.py index ee5999c78bc..a9d044fcfec 100644 --- a/tensorflow/python/estimator/model_fn.py +++ b/tensorflow/python/estimator/model_fn.py @@ -51,6 +51,7 @@ class ModeKeys(object): class MetricKeys(object): """Metric key strings.""" LOSS = 'loss' + AVERAGE_LOSS = 'average_loss' class EstimatorSpec( diff --git a/tensorflow/python/kernel_tests/BUILD b/tensorflow/python/kernel_tests/BUILD index f21402652fa..ad455babbcd 100644 --- a/tensorflow/python/kernel_tests/BUILD +++ b/tensorflow/python/kernel_tests/BUILD @@ -941,6 +941,19 @@ tf_py_test( ], ) +cuda_py_test( + name = "aggregate_ops_test", + size = "small", + srcs = ["aggregate_ops_test.py"], + additional_deps = [ + "//third_party/py/numpy", + "//tensorflow/python:array_ops", + "//tensorflow/python:client_testlib", + "//tensorflow/python:framework_for_generated_wrappers", + "//tensorflow/python:math_ops", + ], +) + cuda_py_test( name = "argmax_op_test", size = "small", diff --git a/tensorflow/python/kernel_tests/aggregate_ops_test.py b/tensorflow/python/kernel_tests/aggregate_ops_test.py new file mode 100644 index 00000000000..f56917f7e9b --- /dev/null +++ b/tensorflow/python/kernel_tests/aggregate_ops_test.py @@ -0,0 +1,79 @@ +# Copyright 2017 The TensorFlow Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# ============================================================================== +"""Tests for aggregate_ops.""" + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import numpy as np + +from tensorflow.python.framework import dtypes +from tensorflow.python.ops import array_ops +from tensorflow.python.ops import math_ops +from tensorflow.python.platform import test + + +class AddNTest(test.TestCase): + # AddN special-cases adding the first M inputs to make (N - M) divisible by 8, + # after which it adds the remaining (N - M) tensors 8 at a time in a loop. + # Test N in [1, 10] so we check each special-case from 1 to 9 and one + # iteration of the loop. + _MAX_N = 10 + + def _supported_types(self): + if test.is_gpu_available(): + return [dtypes.float16, dtypes.float32, dtypes.float64, dtypes.complex64, + dtypes.complex128] + return [dtypes.int8, dtypes.int16, dtypes.int32, dtypes.int64, + dtypes.float16, dtypes.float32, dtypes.float64, dtypes.complex64, + dtypes.complex128] + + def _buildData(self, shape, dtype): + data = np.random.randn(*shape).astype(dtype.as_numpy_dtype) + # For complex types, add an index-dependent imaginary component so we can + # tell we got the right value. 
+ if dtype.is_complex: + return data + 10j * data + return data + + def testAddN(self): + np.random.seed(12345) + with self.test_session(use_gpu=True) as sess: + for dtype in self._supported_types(): + for count in range(1, self._MAX_N + 1): + data = [self._buildData((2, 2), dtype) for _ in range(count)] + actual = sess.run(math_ops.add_n(data)) + expected = np.sum(np.vstack( + [np.expand_dims(d, 0) for d in data]), axis=0) + tol = 5e-3 if dtype == dtypes.float16 else 5e-7 + self.assertAllClose(expected, actual, rtol=tol, atol=tol) + + def testUnknownShapes(self): + np.random.seed(12345) + with self.test_session(use_gpu=True) as sess: + for dtype in self._supported_types(): + data = self._buildData((2, 2), dtype) + for count in range(1, self._MAX_N + 1): + data_ph = array_ops.placeholder(dtype=dtype) + actual = sess.run(math_ops.add_n([data_ph] * count), {data_ph: data}) + expected = np.sum(np.vstack([np.expand_dims(data, 0)] * count), + axis=0) + tol = 5e-3 if dtype == dtypes.float16 else 5e-7 + self.assertAllClose(expected, actual, rtol=tol, atol=tol) + + +if __name__ == "__main__": + test.main() diff --git a/tensorflow/python/ops/control_flow_ops.py b/tensorflow/python/ops/control_flow_ops.py index c4a27009c3c..dea2180069a 100644 --- a/tensorflow/python/ops/control_flow_ops.py +++ b/tensorflow/python/ops/control_flow_ops.py @@ -50,8 +50,6 @@ from __future__ import absolute_import from __future__ import division from __future__ import print_function -import collections - import six from six.moves import xrange # pylint: disable=redefined-builtin @@ -426,10 +424,11 @@ def merge(inputs, name=None): # pylint: enable=protected-access -def _convert_tensorarrays_to_flows(tensors_or_tensor_arrays): - return [ta.flow if isinstance(ta, tensor_array_ops.TensorArray) - else ta - for ta in tensors_or_tensor_arrays] +def _convert_tensorarray_to_flow(tensor_or_tensor_array): + if isinstance(tensor_or_tensor_array, tensor_array_ops.TensorArray): + return 
tensor_or_tensor_array.flow + else: + return tensor_or_tensor_array def _make_tensor_array(ta, t_or_flow): @@ -1637,63 +1636,77 @@ class CondContext(ControlFlowContext): real_val = external_val return real_val + def _BuildCondTensor(self, v): + if isinstance(v, ops.Operation): + # Use pivot as the proxy for this op. + return with_dependencies([v], self._pivot) + elif isinstance(v, (ops.IndexedSlices, sparse_tensor.SparseTensor)): + values = self._ProcessOutputTensor(v.values) + indices = self._ProcessOutputTensor(v.indices) + if isinstance(v, ops.IndexedSlices): + dense_shape = v.dense_shape + if dense_shape is not None: + dense_shape = self._ProcessOutputTensor(dense_shape) + return ops.IndexedSlices(values, indices, dense_shape) + else: + dense_shape = self._ProcessOutputTensor(v.dense_shape) + return sparse_tensor.SparseTensor(indices, values, dense_shape) + else: + v = nest.map_structure(_convert_tensorarray_to_flow, v) + return self._ProcessOutputTensor(ops.convert_to_tensor(v)) + def BuildCondBranch(self, fn): """Add the subgraph defined by fn() to the graph.""" - r = fn() - original_r = r - result = [] - if r is not None: - if not isinstance(r, list) and not isinstance(r, _basetuple): - r = [r] - original_r = [original_r] - r = _convert_tensorarrays_to_flows(r) - for v in r: - real_v = v - if isinstance(v, ops.Operation): - # Use pivot as the proxy for this op. 
- real_v = with_dependencies([v], self._pivot) - else: - if isinstance(v, (ops.IndexedSlices, sparse_tensor.SparseTensor)): - values = self._ProcessOutputTensor(v.values) - indices = self._ProcessOutputTensor(v.indices) - if isinstance(v, ops.IndexedSlices): - dense_shape = v.dense_shape - if dense_shape is not None: - dense_shape = self._ProcessOutputTensor(dense_shape) - real_v = ops.IndexedSlices(values, indices, dense_shape) - else: - dense_shape = self._ProcessOutputTensor(v.dense_shape) - real_v = sparse_tensor.SparseTensor(indices, values, dense_shape) - else: - real_v = self._ProcessOutputTensor(v) - result.append(real_v) - return original_r, result + original_result = fn() + if original_result is None: + return None, None + + result = nest.map_structure(self._BuildCondTensor, original_result) + if not isinstance(result, (list, _basetuple)): + result = [result] + return original_result, result -def cond(pred, fn1, fn2, name=None): - """Return either fn1() or fn2() based on the boolean predicate `pred`. +def _UnpackIfSingleton(res): + if isinstance(res, (list, _basetuple)) and len(res) == 1: + return res[0] + else: + return res + + +def cond(pred, fn1, fn2, strict=False, name=None): + """Return either `fn1()` or `fn2()` based on the boolean predicate `pred`. `fn1` and `fn2` both return lists of output tensors. `fn1` and `fn2` must have the same non-zero number and type of outputs. Note that the conditional execution applies only to the operations defined in - fn1 and fn2. Consider the following simple program: + `fn1` and `fn2`. Consider the following simple program: ```python z = tf.multiply(a, b) result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y)) ``` - If x < y, the `tf.add` operation will be executed and `tf.square` - operation will not be executed. Since z is needed for at least one - branch of the cond, the `tf.multiply` operation is always executed, unconditionally. 
+ If `x < y`, the `tf.add` operation will be executed and `tf.square` + operation will not be executed. Since `z` is needed for at least one + branch of the `cond`, the `tf.multiply` operation is always executed, + unconditionally. Although this behavior is consistent with the dataflow model of TensorFlow, it has occasionally surprised some users who expected a lazier semantics. + `tf.cond` supports nested structures as implemented in + `tensorflow.python.util.nest`. Both `fn1` and `fn2` must return the same + (possibly nested) value structure of lists, tuples, and/or named tuples. + Singleton lists and tuples form the only exceptions to this: when returned by + `fn1` and/or `fn2`, they are implicitly unpacked to single values. This + behavior is disabled by passing `strict=True`. + Args: pred: A scalar determining whether to return the result of `fn1` or `fn2`. fn1: The callable to be performed if pred is true. fn2: The callable to be performed if pred is false. + strict: A boolean that enables/disables 'strict' mode; see above. name: Optional name prefix for the returned tensors. Returns: @@ -1738,23 +1751,43 @@ def cond(pred, fn1, fn2, name=None): # Build the graph for the true branch in a new context. context_t = CondContext(pred, pivot_1, branch=1) context_t.Enter() - orig_res, res_t = context_t.BuildCondBranch(fn1) + orig_res_t, res_t = context_t.BuildCondBranch(fn1) + if orig_res_t is None: + raise ValueError("fn1 must have a return value.") context_t.ExitResult(res_t) context_t.Exit() # Build the graph for the false branch in a new context. 
context_f = CondContext(pred, pivot_2, branch=0) context_f.Enter() - _, res_f = context_f.BuildCondBranch(fn2) + orig_res_f, res_f = context_f.BuildCondBranch(fn2) + if orig_res_f is None: + raise ValueError("fn2 must have a return value.") context_f.ExitResult(res_f) context_f.Exit() + if not strict: + orig_res_t = _UnpackIfSingleton(orig_res_t) + orig_res_f = _UnpackIfSingleton(orig_res_f) + + # Check that the return values of the two branches have the same structure. + try: + nest.assert_same_structure(orig_res_t, orig_res_f) + except TypeError as e: + raise TypeError( + "Incompatible return types of fn1 and fn2: {}".format(e)) + except ValueError as e: + raise ValueError( + "Incompatible return values of fn1 and fn2: {}".format(e)) + # Add the final merge to the graph. - if len(res_t) != len(res_f): - raise ValueError("fn1 and fn2 must return the same number of results.") if not res_t: raise ValueError("fn1 and fn2 must return at least one result.") - for x, y in zip(res_t, res_f): + + res_t_flat = nest.flatten(res_t) + res_f_flat = nest.flatten(res_f) + + for x, y in zip(res_t_flat, res_f_flat): assert ((isinstance(x, ops.IndexedSlices) and isinstance(y, ops.IndexedSlices)) or (isinstance(x, sparse_tensor.SparseTensor) and @@ -1765,14 +1798,20 @@ def cond(pred, fn1, fn2, name=None): if val_x.dtype.base_dtype != val_y.dtype.base_dtype: raise ValueError("Outputs of fn1 and fn2 must have the same type: " "%s, %s" % (val_x.dtype.name, val_y.dtype.name)) - merges = [merge([x[0], x[1]])[0] for x in zip(res_f, res_t)] - merges = _convert_flows_to_tensorarrays(orig_res, merges) + + merges = [merge(pair)[0] for pair in zip(res_f_flat, res_t_flat)] + merges = _convert_flows_to_tensorarrays(nest.flatten(orig_res_t), merges) # Add to collections ops.add_to_collection(ops.GraphKeys.COND_CONTEXT, context_t) ops.add_to_collection(ops.GraphKeys.COND_CONTEXT, context_f) - return merges[0] if len(merges) == 1 else merges + merges = nest.pack_sequence_as(structure=orig_res_t, 
flat_sequence=merges) + + # Singleton lists and tuples are automatically unpacked if strict == False. + if not strict: + merges = _UnpackIfSingleton(merges) + return merges def _resource_safe_shape(t): @@ -2415,8 +2454,8 @@ class WhileContext(ControlFlowContext): # Store body_result to keep track of TensorArrays returned by body original_body_result = body_result # Convert TensorArrays returned by body into their flow variables - flat_result = nest.flatten(body_result) - result = _convert_tensorarrays_to_flows(flat_result) + result = nest.map_structure(_convert_tensorarray_to_flow, + nest.flatten(body_result)) result = ops.convert_n_to_tensor_or_indexed_slices(result) # Add NextIteration and the back edges to complete the loop. @@ -2446,9 +2485,9 @@ class WhileContext(ControlFlowContext): # Keep original_loop_vars to identify which are TensorArrays original_loop_vars = loop_vars - flat_loop_vars = nest.flatten(loop_vars) # Convert TensorArrays to their flow variables - loop_vars = _convert_tensorarrays_to_flows(flat_loop_vars) + loop_vars = nest.map_structure(_convert_tensorarray_to_flow, + nest.flatten(loop_vars)) loop_vars = ops.convert_n_to_tensor_or_indexed_slices(loop_vars) try: self.Enter() @@ -2820,7 +2859,7 @@ def tuple(tensors, name=None, control_inputs=None): return tpl -def case(pred_fn_pairs, default, exclusive=False, name="case"): +def case(pred_fn_pairs, default, exclusive=False, strict=False, name="case"): """Create a case operation. The `pred_fn_pairs` parameter is a dict or list of pairs of size N. @@ -2837,6 +2876,13 @@ def case(pred_fn_pairs, default, exclusive=False, name="case"): are returned immediately. If none of the predicates evaluate to True, this operation returns the tensors generated by `default`. + `tf.case` supports nested structures as implemented in + `tensorflow.python.util.nest`. Both `fn1` and `fn2` must return the same + (possibly nested) value structure of lists, tuples, and/or named tuples. 
+ Singleton lists and tuples form the only exceptions to this: when returned by + `fn1` and/or `fn2`, they are implicitly unpacked to single values. This + behavior is disabled by passing `strict=True`. + Example 1: Pseudocode: ``` @@ -2877,6 +2923,7 @@ def case(pred_fn_pairs, default, exclusive=False, name="case"): callable which returns a list of tensors. default: A callable that returns a list of tensors. exclusive: True iff at most one predicate is allowed to evaluate to `True`. + strict: A boolean that enables/disables 'strict' mode; see above. name: A name for this operation (optional). Returns: @@ -2941,20 +2988,31 @@ def case(pred_fn_pairs, default, exclusive=False, name="case"): # Create an empty tensor, or list, with the right type and shape with ops.name_scope("case_create_empty"): - dummy_value = default() + def _create_empty_constant(dtype, shape): + value = ("" if dtype == dtypes.string else dtype.as_numpy_dtype()) + if shape.ndims is None: + return array_ops.constant(value, dtype=dtype) + else: + temp_shape = [1 if x.value is None else x.value for x in shape] + result = array_ops.constant(value, shape=temp_shape, dtype=dtype) + result._shape = shape # pylint: disable=protected-access + return result + def _correct_empty(v): if isinstance(v, ops.Operation): return no_op() - elif v.dtype == dtypes.string: - return array_ops.constant("") + elif isinstance(v, tensor_array_ops.TensorArray): + return v + elif not hasattr(v, "dtype"): + return ops.convert_to_tensor(v) + elif isinstance(v, sparse_tensor.SparseTensor): + return sparse_tensor.SparseTensor(indices=[[0] * len(v.get_shape())], + values=[v.dtype.as_numpy_dtype()], + dense_shape=v.get_shape()) else: - return array_ops.constant(v.dtype.as_numpy_dtype()) + return _create_empty_constant(v.dtype, v.get_shape()) - if isinstance(dummy_value, collections.Sequence): - dummy_type = type(dummy_value) - empty = lambda: dummy_type(_correct_empty(v) for v in dummy_value) - else: - empty = lambda: 
_correct_empty(dummy_value) + empty = lambda: nest.map_structure(_correct_empty, default()) # case_sequence = [ # cond(~p3 & ~p2 & ~p1, default, empty), @@ -2972,7 +3030,7 @@ def case(pred_fn_pairs, default, exclusive=False, name="case"): prev_case = cond( cp, fn, empty if i == 0 else lambda: prev_case, - name="If_%d" % i) + strict=strict, name="If_%d" % i) return prev_case if exclusive: @@ -2994,6 +3052,8 @@ def case(pred_fn_pairs, default, exclusive=False, name="case"): else: case_seq = _build_case() + if not strict: + case_seq = _UnpackIfSingleton(case_seq) return case_seq diff --git a/tensorflow/python/ops/control_flow_ops_test.py b/tensorflow/python/ops/control_flow_ops_test.py index 9037dd042dd..a88143224f8 100644 --- a/tensorflow/python/ops/control_flow_ops_test.py +++ b/tensorflow/python/ops/control_flow_ops_test.py @@ -18,11 +18,16 @@ from __future__ import absolute_import from __future__ import division from __future__ import print_function +import collections +import numpy as np + from tensorflow.core.framework import graph_pb2 from tensorflow.core.framework import node_def_pb2 from tensorflow.python.framework import constant_op from tensorflow.python.framework import dtypes from tensorflow.python.framework import ops +from tensorflow.python.framework import sparse_tensor +from tensorflow.python.framework import tensor_shape from tensorflow.python.framework.test_util import TensorFlowTestCase from tensorflow.python.ops import array_ops from tensorflow.python.ops import control_flow_ops @@ -37,9 +42,14 @@ from tensorflow.python.ops import variables import tensorflow.python.ops.tensor_array_grad # pylint: disable=unused-import from tensorflow.python.platform import googletest from tensorflow.python.training import momentum +from tensorflow.python.util import nest from tensorflow.python.util.protobuf import compare +TestTuple = collections.namedtuple("TestTuple", "a b") +SingletonTestTuple = collections.namedtuple("SingletonTestTuple", "a") + + class 
GroupTestCase(TensorFlowTestCase): def _StripNode(self, nd): @@ -334,5 +344,340 @@ class ContextTest(TensorFlowTestCase): control_flow_ops.WhileContext.from_proto(c.to_proto()).to_proto()) +def _GetNestedShape(nested): + def _GetShape(tensor): + if isinstance(tensor, tensor_array_ops.TensorArray): + return tensor_array_ops.TensorArray + elif isinstance(tensor, ops.IndexedSlices): + return tensor.dense_shape + else: + return tensor.get_shape() + + return nest.map_structure(_GetShape, nested) + + +def _CreateTensorArray(size, shape): + ta = tensor_array_ops.TensorArray(dtype=dtypes.float32, size=size, + clear_after_read=False) + for i in range(size): + ta = ta.write(i, array_ops.zeros(shape)) + return ta + + +def _RawNestedShape(nested_shape): + def _RawShape(shape): + if isinstance(shape, tensor_shape.TensorShape) and shape.ndims is not None: + return [x.value for x in shape] + else: + return None + return nest.map_structure(_RawShape, nested_shape) + + +# TODO(yori): Add tests for indexed slices. 
+class DataTypesTest(TensorFlowTestCase): + + def assertAllEqualNested(self, a, b): + if isinstance(a, (list, tuple)): + for entry_a, entry_b in zip(a, b): + self.assertAllEqualNested(entry_a, entry_b) + else: + self.assertAllEqual(a, b) + + def _testShape(self, fn_true, fn_false, expected_shape, + strict=False): + condition = array_ops.placeholder(dtypes.bool) + output_cond = control_flow_ops.cond(condition, fn_true, fn_false, + strict=strict) + self.assertEqual(_RawNestedShape(_GetNestedShape(output_cond)), + _RawNestedShape(expected_shape)) + + output_case = control_flow_ops.case([(condition, fn_true)], fn_false, + strict=strict) + self.assertEqual(_RawNestedShape(_GetNestedShape(output_case)), + _RawNestedShape(expected_shape)) + + def _testReturnValues(self, fn_true, fn_false, expected_value_true, + expected_value_false, strict=False, + check_cond=True): + condition = array_ops.placeholder(dtypes.bool) + output_cond = control_flow_ops.cond(condition, fn_true, fn_false, + strict=strict) + output_case = control_flow_ops.case([(condition, fn_true)], fn_false, + strict=strict) + + with self.test_session() as sess: + variables.global_variables_initializer().run() + result_cond, result_case = sess.run([output_cond, output_case], + feed_dict={condition: True}) + self.assertAllEqualNested(result_cond, expected_value_true) + if check_cond: + self.assertAllEqualNested(result_case, expected_value_true) + result_cond, result_case = sess.run([output_cond, output_case], + feed_dict={condition: False}) + self.assertAllEqualNested(result_cond, expected_value_false) + if check_cond: + self.assertAllEqualNested(result_case, expected_value_false) + + def test_int(self): + shape = tensor_shape.TensorShape([]) + fn_true = lambda: 1 + fn_false = lambda: 2 + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, 1, 2) + self._testShape(fn_true, fn_false, shape, strict=True) + self._testReturnValues(fn_true, fn_false, 1, 2, strict=True) + + def 
test_float(self): + shape = tensor_shape.TensorShape([]) + fn_true = lambda: 1.0 + fn_false = lambda: 2.0 + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, 1.0, 2.0) + + def test_noop(self): + shape = tensor_shape.TensorShape(None) + self._testShape(control_flow_ops.no_op, control_flow_ops.no_op, shape) + self._testReturnValues(control_flow_ops.no_op, control_flow_ops.no_op, + True, False, check_cond=False) + + def test_string(self): + shape = tensor_shape.TensorShape([]) + fn_true = lambda: "abc" + fn_false = lambda: "xyz" + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, b"abc", b"xyz") + + def test_variable(self): + shape = tensor_shape.TensorShape([]) + fn_true = lambda: variables.Variable(3.0) + fn_false = lambda: variables.Variable(4.0) + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, 3.0, 4.0) + + def test_none(self): + fn_none = lambda: None + fn_tensor = lambda: constant_op.constant(1) + + with self.assertRaises(ValueError): + control_flow_ops.cond(constant_op.constant(True), fn_none, fn_tensor) + + with self.assertRaises(ValueError): + control_flow_ops.cond(constant_op.constant(True), fn_tensor, fn_none) + + def test_tensors(self): + def _BuildTrueBranch(dtype): + def _Build(): + return (array_ops.zeros([2, 2], dtype=dtype), + array_ops.ones([3, 3], dtype=dtype)) + return _Build + + def _BuildFalseBranch(dtype): + def _Build(): + return (array_ops.ones([2, 2], dtype=dtype), + array_ops.zeros([3, 3], dtype=dtype)) + return _Build + + for dtype in (dtypes.float16, dtypes.int8, dtypes.int32, dtypes.uint8): + shape = (tensor_shape.TensorShape([2, 2]), + tensor_shape.TensorShape([3, 3])) + fn_true = _BuildTrueBranch(dtype) + fn_false = _BuildFalseBranch(dtype) + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, + (np.zeros([2, 2]), np.ones([3, 3])), + (np.ones([2, 2]), np.zeros([3, 3]))) + + def 
test_tensors_unknown_shape(self): + def _BuildTrueBranch(dtype): + def _Build(): + tensor = array_ops.zeros([2, 2], dtype=dtype) + tensor._shape = tensor_shape.TensorShape(None) + return tensor + return _Build + + def _BuildFalseBranch(dtype): + def _Build(): + tensor = array_ops.ones([2, 2], dtype=dtype) + tensor._shape = tensor_shape.TensorShape(None) + return tensor + return _Build + + for dtype in (dtypes.float16, dtypes.int8, dtypes.int32, dtypes.uint8): + shape = tensor_shape.TensorShape(None) + fn_true = _BuildTrueBranch(dtype) + fn_false = _BuildFalseBranch(dtype) + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, + np.zeros([2, 2]), np.ones([2, 2])) + + def test_sparse_tensors(self): + shape = tensor_shape.TensorShape([None, None]) + + def FnTrue(): + return [sparse_tensor.SparseTensor(indices=[[0, 0], [1, 2]], + values=[1, 2], dense_shape=[3, 4])] + + def FnFalse(): + return [sparse_tensor.SparseTensor(indices=[[0, 0], [2, 1]], + values=[3, 4], dense_shape=[3, 4])] + + value1 = sparse_tensor.SparseTensorValue(indices=[[0, 0], [1, 2]], + values=[1, 2], dense_shape=[3, 4]) + value2 = sparse_tensor.SparseTensorValue(indices=[[0, 0], [2, 1]], + values=[3, 4], dense_shape=[3, 4]) + self._testShape(FnTrue, FnFalse, shape) + self._testReturnValues(FnTrue, FnFalse, value1, value2) + self._testShape(FnTrue, FnFalse, [shape], strict=True) + self._testReturnValues(FnTrue, FnFalse, [value1], [value2], strict=True) + + def test_tensors_with_partially_specified_shapes(self): + def _BuildBranch(dtype, shape): + def _Build(): + a = array_ops.zeros([2, 2], dtype=dtype) + b = array_ops.zeros([5], dtype=dtype) + c = array_ops.ones([3, 3], dtype=dtype) + a._shape = tensor_shape.TensorShape(shape[0]) + b._shape = tensor_shape.TensorShape(shape[1]) + c._shape = tensor_shape.TensorShape(shape[2]) + return a, b, c + return _Build + + for dtype in (dtypes.float16, dtypes.int8, dtypes.int32, dtypes.uint8): + shape = 
(tensor_shape.TensorShape([None, 2]), + tensor_shape.TensorShape([None]), + tensor_shape.TensorShape([3, None])) + fn_true = _BuildBranch(dtype, shape) + fn_false = _BuildBranch(dtype, shape) + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, + (np.zeros([2, 2]), np.zeros(5), np.ones([3, 3])), + (np.zeros([2, 2]), np.zeros(5), np.ones([3, 3]))) + + def test_tensor_arrays(self): + element_shape = tensor_shape.TensorShape([2]) + ta1 = _CreateTensorArray(4, element_shape) + ta2 = _CreateTensorArray(4, element_shape) + shape = tensor_array_ops.TensorArray + fn_true = lambda: ta1 + fn_false = lambda: ta2 + self._testShape(fn_true, fn_false, shape) + + def test_tensor_array_reads(self): + shape = tensor_shape.TensorShape([2]) + ta = _CreateTensorArray(4, shape) + fn_true = lambda: ta.read(0) + fn_false = lambda: ta.read(1) + self._testShape(fn_true, fn_false, shape) + + def test_list(self): + shape = [tensor_shape.TensorShape([]), tensor_shape.TensorShape([]), + tensor_shape.TensorShape([])] + fn_true = lambda: [constant_op.constant(1), 2, variables.Variable(3.0)] + fn_false = lambda: [constant_op.constant(3), 4, variables.Variable(5.0)] + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, [1, 2, 3.0], [3, 4, 5.0]) + + def test_non_strict(self): + shape = tensor_shape.TensorShape([]) + fn_tensor = lambda: constant_op.constant(1) + fn_list = lambda: [constant_op.constant(2)] + fn_tuple = lambda: (constant_op.constant(3),) + self._testShape(fn_tensor, fn_list, shape) + self._testShape(fn_tensor, fn_tuple, shape) + self._testShape(fn_list, fn_tuple, shape) + self._testReturnValues(fn_tensor, fn_list, 1, 2) + self._testReturnValues(fn_tensor, fn_tuple, 1, 3) + self._testReturnValues(fn_list, fn_tuple, 2, 3) + + def test_singleton_strict(self): + fn_tensor = lambda: constant_op.constant(1) + fn_list = lambda: [constant_op.constant(2)] + fn_tuple = lambda: (constant_op.constant(3),) + + with 
self.assertRaises(ValueError): + control_flow_ops.cond(constant_op.constant(True), fn_tensor, fn_list, + strict=True) + + with self.assertRaises(TypeError): + control_flow_ops.cond(constant_op.constant(True), fn_list, fn_tuple, + strict=True) + + with self.assertRaises(ValueError): + control_flow_ops.case([(constant_op.constant(True), fn_tensor)], fn_list, + strict=True) + + with self.assertRaises(TypeError): + control_flow_ops.case([(constant_op.constant(True), fn_list)], fn_tuple, + strict=True) + + def test_singleton_list(self): + shape = tensor_shape.TensorShape([]) + fn_true = lambda: [constant_op.constant(1)] + fn_false = lambda: [constant_op.constant(3)] + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, 1, 3) + self._testShape(fn_true, fn_false, [shape], strict=True) + self._testReturnValues(fn_true, fn_false, [1], [3], strict=True) + + def test_singleton_tuple(self): + shape = tensor_shape.TensorShape([]) + fn_true = lambda: (constant_op.constant(1),) + fn_false = lambda: (constant_op.constant(3),) + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, 1, 3) + self._testShape(fn_true, fn_false, (shape,), strict=True) + self._testReturnValues(fn_true, fn_false, (1,), (3,), + strict=True) + + def test_singleton_namedtuple(self): + shape = tensor_shape.TensorShape([]) + fn_true = lambda: SingletonTestTuple(constant_op.constant(1)) + fn_false = lambda: SingletonTestTuple(constant_op.constant(3)) + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, 1, 3) + self._testShape(fn_true, fn_false, SingletonTestTuple(shape), + strict=True) + self._testReturnValues(fn_true, fn_false, SingletonTestTuple(1), + SingletonTestTuple(3), strict=True) + + def test_tuple(self): + shape = (tensor_shape.TensorShape([]), tensor_shape.TensorShape([])) + fn_true = lambda: (constant_op.constant(1), 2) + fn_false = lambda: (constant_op.constant(3), 4) + self._testShape(fn_true, 
fn_false, shape) + self._testReturnValues(fn_true, fn_false, (1, 2), (3, 4)) + + def test_namedtuple(self): + shape = TestTuple(tensor_shape.TensorShape([]), + tensor_shape.TensorShape([])) + fn_true = lambda: TestTuple(constant_op.constant(1), 2) + fn_false = lambda: TestTuple(constant_op.constant(3), 4) + self._testShape(fn_true, fn_false, shape) + self._testReturnValues(fn_true, fn_false, TestTuple(1, 2), TestTuple(3, 4)) + + def test_nested(self): + shape = [tensor_shape.TensorShape([]), + TestTuple(tensor_shape.TensorShape([]), + [tensor_shape.TensorShape([]), + tensor_shape.TensorShape([])]), + tensor_shape.TensorShape([5, 5]), + tensor_shape.TensorShape([])] + + def FnTrue(): + return [constant_op.constant(1), + TestTuple(constant_op.constant(2), [3, 4]), + array_ops.zeros([5, 5]), 6] + + def FnFalse(): + return [constant_op.constant(11), + TestTuple(constant_op.constant(12), [13, 14]), + array_ops.ones([5, 5]), 16] + + self._testShape(FnTrue, FnFalse, shape) + self._testReturnValues(FnTrue, FnFalse, + [1, TestTuple(2, [3, 4]), np.zeros([5, 5]), 6], + [11, TestTuple(12, [13, 14]), np.ones([5, 5]), 16]) + + if __name__ == "__main__": googletest.main() diff --git a/tensorflow/python/ops/embedding_ops.py b/tensorflow/python/ops/embedding_ops.py index 2aeb9ce14d3..168ca7fefcc 100644 --- a/tensorflow/python/ops/embedding_ops.py +++ b/tensorflow/python/ops/embedding_ops.py @@ -33,16 +33,16 @@ from tensorflow.python.ops import variables from tensorflow.python.platform import tf_logging as logging -def _do_gather(params, ids, validate_indices=True, name=None): +def _do_gather(params, ids, name=None): """Deals with doing gather differently for resource variables.""" if isinstance(params, resource_variable_ops.ResourceVariable): return params.sparse_read(ids, name=name) - return array_ops.gather( - params, ids, name=name, validate_indices=validate_indices) + return array_ops.gather(params, ids, name=name) def embedding_lookup(params, ids, partition_strategy="mod", 
name=None, - validate_indices=True, max_norm=None): + validate_indices=True, # pylint: disable=unused-argument + max_norm=None): """Looks up `ids` in a list of embedding tensors. This function is used to perform parallel lookups on the list of @@ -82,7 +82,10 @@ def embedding_lookup(params, ids, partition_strategy="mod", name=None, if `len(params) > 1`. Currently `"div"` and `"mod"` are supported. Default is `"mod"`. name: A name for the operation (optional). - validate_indices: Whether or not to validate gather indices. + validate_indices: DEPRECATED. If this operation is assigned to CPU, values + in `indices` are always validated to be within range. If assigned to GPU, + out-of-bound indices result in safe but unspecified behavior, which may + include raising an error. max_norm: If not None, embedding values are l2-normalized to the value of max_norm. @@ -92,7 +95,7 @@ def embedding_lookup(params, ids, partition_strategy="mod", name=None, Raises: ValueError: If `params` is empty. """ - if params is None or params == []: # pylint: disable=g-explicit-bool-comparison + if params in (None, (), []): raise ValueError("Need at least one param") if isinstance(params, variables.PartitionedVariable): params = list(params) # Iterate to get the underlying Variables. 
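The embedding_ops hunks above remove `validate_indices` from `_do_gather`; the lookup itself still shards ids across `params` using the `partition_strategy` described in the docstring. As a hedged, standalone sketch (plain Python with illustrative names, not TensorFlow's implementation) of how the `"mod"` strategy assigns ids to shards:

```python
# Illustrative sketch (not TensorFlow's code) of the "mod" partition
# strategy from embedding_lookup's docstring: id i lives in shard
# i % num_shards, at row i // num_shards within that shard.
def mod_partition(ids, num_shards):
    shard_assignments = [i % num_shards for i in ids]   # which shard holds each id
    rows_within_shard = [i // num_shards for i in ids]  # gather index inside that shard
    return shard_assignments, rows_within_shard

shards, rows = mod_partition([0, 1, 2, 3, 4, 5], 3)
# ids 0..5 over 3 shards -> shards [0, 1, 2, 0, 1, 2], rows [0, 0, 0, 1, 1, 1]
```

The per-shard gathers are then stitched back into the original id order, which is what the `dynamic_stitch` call in the second hunk does in the real code.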
@@ -114,9 +117,7 @@ def embedding_lookup(params, ids, partition_strategy="mod", name=None, params = ops.convert_n_to_tensor_or_indexed_slices(params, name="params") if np == 1: with ops.colocate_with(params[0]): - return maybe_normalize( - _do_gather( - params[0], ids, validate_indices=validate_indices, name=name)) + return maybe_normalize(_do_gather(params[0], ids, name=name)) else: ids = ops.convert_to_tensor(ids, name="ids") flat_ids = array_ops.reshape(ids, [-1]) @@ -176,9 +177,7 @@ def embedding_lookup(params, ids, partition_strategy="mod", name=None, partitioned_result = [] for p in xrange(np): with ops.colocate_with(params[p]): - partitioned_result.append( - _do_gather(params[p], gather_ids[p], - validate_indices=validate_indices)) + partitioned_result.append(_do_gather(params[p], gather_ids[p])) # Stitch these back together ret = data_flow_ops.dynamic_stitch(pindices, partitioned_result, name=name) diff --git a/tensorflow/python/tools/inspect_checkpoint.py b/tensorflow/python/tools/inspect_checkpoint.py index 6faf570de72..9942a5d9086 100644 --- a/tensorflow/python/tools/inspect_checkpoint.py +++ b/tensorflow/python/tools/inspect_checkpoint.py @@ -95,7 +95,7 @@ def parse_numpy_printoption(kv_str): "Setting '%s' from the command line is not supported." % k) try: v = (v_type(v_str) if v_type is not bool - else flags.BooleanParser().Parse(v_str)) + else flags.BooleanParser().parse(v_str)) except ValueError as e: raise argparse.ArgumentTypeError(e.message) np.set_printoptions(**{k: v}) diff --git a/tensorflow/python/training/coordinator.py b/tensorflow/python/training/coordinator.py index 2863afb21e2..fea2f8240ee 100644 --- a/tensorflow/python/training/coordinator.py +++ b/tensorflow/python/training/coordinator.py @@ -106,7 +106,7 @@ class Coordinator(object): After a thread has called `coord.request_stop()` the other threads have a fixed time to stop, this is called the 'stop grace period' and defaults to 2 minutes. 
If any of the threads is still alive after the grace period expires - `coord.join()` raises a RuntimeException reporting the laggards. + `coord.join()` raises a RuntimeError reporting the laggards. ```python try: @@ -117,7 +117,7 @@ class Coordinator(object): ...start thread N...(coord, ...) # Wait for all the threads to terminate, give them 10s grace period coord.join(threads, stop_grace_period_secs=10) - except RuntimeException: + except RuntimeError: ...one of the threads took more than 10s to stop after request_stop() ...was called. except Exception: diff --git a/tensorflow/python/training/training.py b/tensorflow/python/training/training.py index 28c0668d24e..f2bcc561753 100644 --- a/tensorflow/python/training/training.py +++ b/tensorflow/python/training/training.py @@ -68,6 +68,7 @@ See the @{$python/train} guide. @@LoggingTensorHook @@StopAtStepHook @@CheckpointSaverHook +@@CheckpointSaverListener @@NewCheckpointReader @@StepCounterHook @@NanLossDuringTrainingError @@ -128,6 +129,7 @@ from tensorflow.python.training.basic_session_run_hooks import SecondOrStepTimer from tensorflow.python.training.basic_session_run_hooks import LoggingTensorHook from tensorflow.python.training.basic_session_run_hooks import StopAtStepHook from tensorflow.python.training.basic_session_run_hooks import CheckpointSaverHook +from tensorflow.python.training.basic_session_run_hooks import CheckpointSaverListener from tensorflow.python.training.basic_session_run_hooks import StepCounterHook from tensorflow.python.training.basic_session_run_hooks import NanLossDuringTrainingError from tensorflow.python.training.basic_session_run_hooks import NanTensorHook diff --git a/tensorflow/tensorboard/BUILD b/tensorflow/tensorboard/BUILD index ea409a93124..9772538524e 100644 --- a/tensorflow/tensorboard/BUILD +++ b/tensorflow/tensorboard/BUILD @@ -39,6 +39,8 @@ py_binary( "//tensorflow/python:platform", "//tensorflow/tensorboard/backend:application", 
"//tensorflow/tensorboard/backend/event_processing:event_file_inspector", + "//tensorflow/tensorboard/plugins/projector:projector_plugin", + "//tensorflow/tensorboard/plugins/text:text_plugin", "@org_pocoo_werkzeug//:werkzeug", ], ) diff --git a/tensorflow/tensorboard/backend/BUILD b/tensorflow/tensorboard/backend/BUILD index 4e1db853744..d27a22a82b3 100644 --- a/tensorflow/tensorboard/backend/BUILD +++ b/tensorflow/tensorboard/backend/BUILD @@ -65,9 +65,6 @@ py_library( "//tensorflow/python:platform", "//tensorflow/tensorboard/backend/event_processing:event_accumulator", "//tensorflow/tensorboard/backend/event_processing:event_multiplexer", - "//tensorflow/tensorboard/plugins/debugger:debugger_plugin", - "//tensorflow/tensorboard/plugins/projector:projector_plugin", - "//tensorflow/tensorboard/plugins/text:text_plugin", "@org_pocoo_werkzeug//:werkzeug", "@six_archive//:six", ], diff --git a/tensorflow/tensorboard/backend/application.py b/tensorflow/tensorboard/backend/application.py index 974762822fc..3c8963e302f 100644 --- a/tensorflow/tensorboard/backend/application.py +++ b/tensorflow/tensorboard/backend/application.py @@ -43,9 +43,6 @@ from tensorflow.tensorboard.backend import http_util from tensorflow.tensorboard.backend import process_graph from tensorflow.tensorboard.backend.event_processing import event_accumulator from tensorflow.tensorboard.backend.event_processing import event_multiplexer -from tensorflow.tensorboard.plugins.debugger import debugger_plugin -from tensorflow.tensorboard.plugins.projector import projector_plugin -from tensorflow.tensorboard.plugins.text import text_plugin DEFAULT_SIZE_GUIDANCE = { @@ -97,18 +94,27 @@ class _OutputFormat(object): CSV = 'csv' -def standard_tensorboard_wsgi(logdir, purge_orphaned_data, reload_interval): - """Construct a TensorBoardWSGIApp with standard plugins and multiplexer.""" +def standard_tensorboard_wsgi( + logdir, + purge_orphaned_data, + reload_interval, + plugins): + """Construct a 
TensorBoardWSGIApp with standard plugins and multiplexer. + + Args: + logdir: The path to the directory containing events files. + purge_orphaned_data: Whether to purge orphaned data. + reload_interval: The interval at which the backend reloads more data in + seconds. + plugins: A list of plugins for TensorBoard to initialize. + + Returns: + The new TensorBoard WSGI application. + """ multiplexer = event_multiplexer.EventMultiplexer( size_guidance=DEFAULT_SIZE_GUIDANCE, purge_orphaned_data=purge_orphaned_data) - plugins = [ - debugger_plugin.DebuggerPlugin(), - projector_plugin.ProjectorPlugin(), - text_plugin.TextPlugin(), - ] - return TensorBoardWSGIApp(logdir, plugins, multiplexer, reload_interval) diff --git a/tensorflow/tensorboard/backend/application_test.py b/tensorflow/tensorboard/backend/application_test.py index 002709cd5b0..a5181401fa2 100644 --- a/tensorflow/tensorboard/backend/application_test.py +++ b/tensorflow/tensorboard/backend/application_test.py @@ -54,15 +54,18 @@ from tensorflow.tensorboard.plugins import base_plugin class FakePlugin(base_plugin.TBPlugin): """A plugin with no functionality.""" - def __init__(self, plugin_name, is_active_value): + def __init__(self, plugin_name, is_active_value, routes_mapping): """Constructs a fake plugin. Args: plugin_name: The name of this plugin. is_active_value: Whether the plugin is active. + routes_mapping: A dictionary mapping from route (string URL path) to the + method called when a user issues a request to that route. """ self.plugin_name = plugin_name self._is_active_value = is_active_value + self._routes_mapping = routes_mapping def get_plugin_apps(self, multiplexer, logdir): """Returns a mapping from routes to handlers offered by this plugin. @@ -72,9 +75,9 @@ class FakePlugin(base_plugin.TBPlugin): logdir: The path to the directory containing logs. Returns: - An empty dict. This plugin offers no routes. + A dictionary mapping from routes to handlers offered by this plugin. 
""" - return {} + return self._routes_mapping def is_active(self): """Returns whether this plugin is active. @@ -97,8 +100,8 @@ class TensorboardServerTest(test.TestCase): size_guidance=application.DEFAULT_SIZE_GUIDANCE, purge_orphaned_data=True) plugins = [ - FakePlugin(plugin_name='foo', is_active_value=True), - FakePlugin(plugin_name='bar', is_active_value=False) + FakePlugin(plugin_name='foo', is_active_value=True, routes_mapping={}), + FakePlugin(plugin_name='bar', is_active_value=False, routes_mapping={}) ] app = application.TensorBoardWSGIApp( self.temp_dir, plugins, multiplexer, reload_interval=0) @@ -476,10 +479,41 @@ class TensorBoardAssetsTest(test.TestCase): def testTagFound(self): tag = application.get_tensorboard_tag() self.assertTrue(tag) - app = application.standard_tensorboard_wsgi('', True, 60) + app = application.standard_tensorboard_wsgi('', True, 60, []) self.assertEqual(app.tag, tag) +class TensorBoardPluginsTest(test.TestCase): + + def testPluginsAdded(self): + + def foo_handler(): + pass + + def bar_handler(): + pass + + plugins = [ + FakePlugin( + plugin_name='foo', + is_active_value=True, + routes_mapping={'/foo_route': foo_handler}), + FakePlugin( + plugin_name='bar', + is_active_value=True, + routes_mapping={'/bar_route': bar_handler}), + ] + + # The application should have added routes for both plugins. + app = application.standard_tensorboard_wsgi('', True, 60, plugins) + + # The routes are prefixed with /data/plugin/[plugin name]. + self.assertDictContainsSubset({ + '/data/plugin/foo/foo_route': foo_handler, + '/data/plugin/bar/bar_route': bar_handler, + }, app.data_applications) + + class TensorboardSimpleServerConstructionTest(test.TestCase): """Tests that the default HTTP server is constructed without error. @@ -533,14 +567,18 @@ class TensorBoardApplcationConstructionTest(test.TestCase): # Fails if there is an unnamed plugin with self.assertRaises(ValueError): # This plugin lacks a name. 
- plugins = [FakePlugin(plugin_name=None, is_active_value=True)] + plugins = [ + FakePlugin(plugin_name=None, is_active_value=True, routes_mapping={}) + ] application.TensorBoardWSGIApp(logdir, plugins, multiplexer, 0) # Fails if there are two plugins with same name with self.assertRaises(ValueError): plugins = [ - FakePlugin(plugin_name='foo', is_active_value=True), - FakePlugin(plugin_name='foo', is_active_value=True), + FakePlugin( + plugin_name='foo', is_active_value=True, routes_mapping={}), + FakePlugin( + plugin_name='foo', is_active_value=True, routes_mapping={}), ] application.TensorBoardWSGIApp(logdir, plugins, multiplexer, 0) diff --git a/tensorflow/tensorboard/components/tf_audio_dashboard/tf-audio-dashboard.html b/tensorflow/tensorboard/components/tf_audio_dashboard/tf-audio-dashboard.html index ad879210d6f..2088cde2787 100644 --- a/tensorflow/tensorboard/components/tf_audio_dashboard/tf-audio-dashboard.html +++ b/tensorflow/tensorboard/components/tf_audio_dashboard/tf-audio-dashboard.html @@ -59,12 +59,16 @@ tf-audio-dashboard displays a dashboard that loads audio from a TensorFlow run. diff --git a/tensorflow/tensorboard/components/tf_dashboard_common/tf-run-selector.html b/tensorflow/tensorboard/components/tf_dashboard_common/tf-run-selector.html index 8f2ea402e89..81a72793b96 100644 --- a/tensorflow/tensorboard/components/tf_dashboard_common/tf-run-selector.html +++ b/tensorflow/tensorboard/components/tf_dashboard_common/tf-run-selector.html @@ -139,17 +139,9 @@ Properties out: }, }, observers: [ + "_onBackendUpdate(backend)", "_logdirSet(logdir)", ], - ready: function() { - // Populate the logdir. - this.backend.logdir().then(logdirObject => { - this.set('logdir', logdirObject.logdir); - }).catch(e => { - // Fetching the logdir failed. Prevent the exception from logging to - // console. The console already logs a 404 network event. 
- }); - }, _toggleAll: function() { this.$.multiCheckbox.toggleAll(); }, @@ -157,8 +149,21 @@ Properties out: _breakString: function(originalString) { return originalString.replace(/([\/=\-_,])/g, "$1<wbr>"); }, + _onBackendUpdate: function(backend) { + if (backend === undefined) { + return; + } + + // When the backend is set, the selector can request the logdir. + backend.logdir().then(logdirObject => { + this.set('logdir', logdirObject.logdir); + }).catch(e => { + // Fetching the logdir failed. Prevent the exception from logging to + // console. The console already logs a 404 network event. + }); + }, _logdirSet: function(logdir) { - if (!logdir) { + if (logdir === undefined) { // The logdir has not been set yet. return; } diff --git a/tensorflow/tensorboard/components/tf_distribution_dashboard/tf-distribution-dashboard.html b/tensorflow/tensorboard/components/tf_distribution_dashboard/tf-distribution-dashboard.html index 2da848bd99e..58ae396cb45 100644 --- a/tensorflow/tensorboard/components/tf_distribution_dashboard/tf-distribution-dashboard.html +++ b/tensorflow/tensorboard/components/tf_distribution_dashboard/tf-distribution-dashboard.html @@ -101,9 +101,13 @@ contains vz-distribution-charts embedded inside tf-panes-helper's. diff --git a/tensorflow/tensorboard/components/vz_projector/vz-projector-dashboard.html b/tensorflow/tensorboard/components/vz_projector/vz-projector-dashboard.html index b641bb0f293..a641a54e418 100644 --- a/tensorflow/tensorboard/components/vz_projector/vz-projector-dashboard.html +++ b/tensorflow/tensorboard/components/vz_projector/vz-projector-dashboard.html @@ -16,6 +16,7 @@ limitations under the License. --> + @@ -37,18 +38,33 @@ limitations under the License. 
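The `TensorBoardPluginsTest` added earlier in this patch asserts that each plugin's routes end up mounted under `/data/plugin/[plugin name]`. A minimal sketch of that prefixing (hypothetical helper names; the real wiring lives inside `TensorBoardWSGIApp`):

```python
# Hypothetical sketch of the route prefixing that TensorBoardPluginsTest
# asserts: each route a plugin declares is exposed at
# /data/plugin/<plugin name><route>.
def prefix_plugin_routes(plugins):
    data_applications = {}
    for name, routes in plugins:
        for route, handler in routes.items():
            data_applications['/data/plugin/%s%s' % (name, route)] = handler
    return data_applications

def foo_handler():
    pass

apps = prefix_plugin_routes([('foo', {'/foo_route': foo_handler})])
# -> {'/data/plugin/foo/foo_route': foo_handler}
```

Prefixing by plugin name is what lets two plugins declare the same route path without colliding, and it is why the patch can make each plugin carry its own name/route prefix instead of hard-coding the plugin list in the backend.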
diff --git a/tensorflow/tensorboard/plugins/debugger/BUILD b/tensorflow/tensorboard/plugins/debugger/BUILD deleted file mode 100644 index 38aa719b9b9..00000000000 --- a/tensorflow/tensorboard/plugins/debugger/BUILD +++ /dev/null @@ -1,55 +0,0 @@ -# Description: -# TensorBoard plugin for interacting with tfdbg, the TensorFlow debugger - -package(default_visibility = ["//tensorflow:internal"]) - -licenses(["notice"]) # Apache 2.0 - -exports_files(["LICENSE"]) - -load("//tensorflow:tensorflow.bzl", "py_test") - -## TensorFlow Debugger Plugiin ## -py_library( - name = "debugger_plugin", - srcs = ["debugger_plugin.py"], - srcs_version = "PY2AND3", - deps = [ - "//tensorflow/python:framework", - "//tensorflow/python:platform", - "//tensorflow/tensorboard/backend:http_util", - "//tensorflow/tensorboard/backend/event_processing:event_accumulator", - "//tensorflow/tensorboard/backend/event_processing:event_file_loader", - "//tensorflow/tensorboard/plugins:base_plugin", - ], -) - -py_test( - name = "debugger_plugin_test", - size = "small", - srcs = ["debugger_plugin_test.py"], - main = "debugger_plugin_test.py", - srcs_version = "PY2AND3", - tags = ["no_pip"], - deps = [ - ":debugger_plugin", - "//tensorflow/core:protos_all_py", - "//tensorflow/python:client_testlib", - "//tensorflow/python:pywrap_tensorflow", - "//tensorflow/python:util", - "//tensorflow/tensorboard/backend:application", - "//tensorflow/tensorboard/backend/event_processing:event_multiplexer", - "//third_party/py/numpy", - "@org_pocoo_werkzeug//:werkzeug", - ], -) - -filegroup( - name = "all_files", - srcs = glob( - [ - "*", - ], - ), - visibility = ["//tensorflow:__subpackages__"], -) diff --git a/tensorflow/tensorboard/plugins/debugger/debugger_plugin.py b/tensorflow/tensorboard/plugins/debugger/debugger_plugin.py deleted file mode 100644 index 5d34bb91dbd..00000000000 --- a/tensorflow/tensorboard/plugins/debugger/debugger_plugin.py +++ /dev/null @@ -1,355 +0,0 @@ -# Copyright 2016 The TensorFlow Authors. 
All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""The plugin for serving data from a TensorFlow debugger.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import collections -import glob -import json -import os -import re - -from werkzeug import wrappers - -from tensorflow.python.framework import tensor_util -from tensorflow.python.platform import tf_logging as logging -from tensorflow.tensorboard.backend import http_util -from tensorflow.tensorboard.backend.event_processing import event_accumulator -from tensorflow.tensorboard.backend.event_processing import event_file_loader -from tensorflow.tensorboard.plugins import base_plugin - -# The prefix of routes provided by this plugin. -_PLUGIN_PREFIX_ROUTE = 'debugger' - -# HTTP routes. -_HEALTH_PILLS_ROUTE = '/health_pills' - -# The POST key of HEALTH_PILLS_ROUTE for a JSON list of node names. -_NODE_NAMES_POST_KEY = 'node_names' - -# The POST key of HEALTH_PILLS_ROUTE for the run to retrieve health pills for. -_RUN_POST_KEY = 'run' - -# The default run to retrieve health pills for. -_DEFAULT_RUN = '.' - -# The POST key of HEALTH_PILLS_ROUTE for the specific step to retrieve health -# pills for. -_STEP_POST_KEY = 'step' - -# A glob pattern for files containing debugger-related events. 
-_DEBUGGER_EVENTS_GLOB_PATTERN = 'events.debugger*' - - -class DebuggerPlugin(base_plugin.TBPlugin): - """TensorFlow Debugger plugin. Receives requests for debugger-related data. - - That data could include health pills, which unveil the status of tensor - values. - """ - - plugin_name = _PLUGIN_PREFIX_ROUTE - - def get_plugin_apps(self, multiplexer, logdir): - """Obtains a mapping between routes and handlers. Stores the logdir. - - Args: - multiplexer: The EventMultiplexer that provides TB data. - logdir: The logdir string - the directory of events files. - - Returns: - A mapping between routes and handlers (functions that respond to - requests). - """ - self._event_multiplexer = multiplexer - self._logdir = logdir - return { - _HEALTH_PILLS_ROUTE: self._serve_health_pills_handler, - } - - def is_active(self): - """Determines whether this plugin is active. - - This plugin is active if any health pills information is present for any - run. This method must be called only after get_plugin_apps has been called. - - Returns: - A boolean. Whether this plugin is active. - """ - for run_name in self._event_multiplexer.Runs(): - if self._event_multiplexer.GetOpsWithHealthPills(run_name): - return True - - return False - - @wrappers.Request.application - def _serve_health_pills_handler(self, request): - """A (wrapped) werkzeug handler for serving health pills. - - Accepts POST requests and responds with health pills. The request accepts - several POST parameters: - - node_names: (required string) A JSON-ified list of node names for which - the client would like to request health pills. - run: (optional string) The run to retrieve health pills for. Defaults to - '.'. This data is sent via POST (not GET) since URL length is limited. - step: (optional integer): The session run step for which to - retrieve health pills. If provided, the handler reads the health pills - of that step from disk (which is slow) and produces a response with - only health pills at that step. 
If not provided, the handler returns a - response with health pills at all steps sampled by the event - multiplexer (the fast path). The motivation here is that, sometimes, - one desires to examine health pills at a specific step (to say find - the first step that causes a model to blow up with NaNs). - get_plugin_apps must be called before this slower feature is used - because that method passes the logdir (directory path) to this plugin. - - This handler responds with a JSON-ified object mapping from node names to a - list (of size 1) of health pill event objects, each of which has these - properties. - - { - 'wall_time': float, - 'step': int, - 'node_name': string, - 'output_slot': int, - # A list of 12 floats that summarizes the elements of the tensor. - 'value': float[], - } - - Node names for which there are no health pills to be found are excluded from - the mapping. - - Args: - request: The request issued by the client for health pills. - - Returns: - A werkzeug BaseResponse object. - """ - if request.method != 'POST': - logging.error( - '%s requests are forbidden by the debugger plugin.', request.method) - return wrappers.Response(status=405) - - if _NODE_NAMES_POST_KEY not in request.form: - logging.error( - 'The %r POST key was not found in the request for health pills.', - _NODE_NAMES_POST_KEY) - return wrappers.Response(status=400) - - jsonified_node_names = request.form[_NODE_NAMES_POST_KEY] - try: - node_names = json.loads(jsonified_node_names) - except Exception as e: # pylint: disable=broad-except - # Different JSON libs raise different exceptions, so we just do a - # catch-all here. This problem is complicated by how Tensorboard might be - # run in many different environments, as it is open-source. 
- logging.error('Could not decode node name JSON string %r: %s', - jsonified_node_names, e) - return wrappers.Response(status=400) - - if not isinstance(node_names, list): - logging.error('%r is not a JSON list of node names:', - jsonified_node_names) - return wrappers.Response(status=400) - - run = request.form.get(_RUN_POST_KEY, _DEFAULT_RUN) - step_string = request.form.get(_STEP_POST_KEY, None) - if step_string is None: - # Use all steps sampled by the event multiplexer (Relatively fast). - mapping = self._obtain_sampled_health_pills(run, node_names) - else: - # Read disk to obtain the health pills for that step (Relatively slow). - # Make sure that the directory for the run exists. - # Determine the directory of events file to read. - events_directory = self._logdir - if run != _DEFAULT_RUN: - # Use the directory for the specific run. - events_directory = os.path.join(events_directory, run) - - step = int(step_string) - try: - mapping = self._obtain_health_pills_at_step( - events_directory, node_names, step) - except IOError as error: - logging.error( - 'Error retrieving health pills for step %d: %s', step, error) - return wrappers.Response(status=404) - - # Convert event_accumulator.HealthPillEvents to JSON-able dicts. - jsonable_mapping = {} - for node_name, events in mapping.items(): - jsonable_mapping[node_name] = [e._asdict() for e in events] - return http_util.Respond(request, jsonable_mapping, 'application/json') - - def _obtain_sampled_health_pills(self, run, node_names): - """Obtains the health pills for a run sampled by the event multiplexer. - - This is much faster than the alternative path of reading health pills from - disk. - - Args: - run: The run to fetch health pills for. - node_names: A list of node names for which to retrieve health pills. - - Returns: - A dictionary mapping from node name to a list of - event_accumulator.HealthPillEvents. 
-    """
-    mapping = {}
-    for node_name in node_names:
-      try:
-        mapping[node_name] = self._event_multiplexer.HealthPills(run, node_name)
-      except KeyError:
-        logging.info('No health pills found for node %r.', node_name)
-        continue
-
-    return mapping
-
-  def _obtain_health_pills_at_step(self, events_directory, node_names, step):
-    """Reads disk to obtain the health pills for a run at a specific step.
-
-    This could be much slower than the alternative path of just returning all
-    health pills sampled by the event multiplexer. It could take tens of minutes
-    to complete this call for large graphs for big step values (in the
-    thousands).
-
-    Args:
-      events_directory: The directory containing events for the desired run.
-      node_names: A list of node names for which to retrieve health pills.
-      step: The step to obtain health pills for.
-
-    Returns:
-      A dictionary mapping from node name to a list of health pill objects (see
-      docs for _serve_health_pills_handler for properties of those objects).
-
-    Raises:
-      IOError: If no files with health pill events could be found.
-    """
-    # Obtain all files with debugger-related events.
-    pattern = os.path.join(events_directory, _DEBUGGER_EVENTS_GLOB_PATTERN)
-    file_paths = glob.glob(pattern)
-
-    if not file_paths:
-      raise IOError(
-          'No events files found that match the pattern %r.' % pattern)
-
-    # Sort by name (and thus by timestamp).
-    file_paths.sort()
-
-    mapping = collections.defaultdict(list)
-    node_name_set = frozenset(node_names)
-
-    for file_path in file_paths:
-      should_stop = self._process_health_pill_event(
-          node_name_set, mapping, step, file_path)
-      if should_stop:
-        break
-
-    return mapping
-
-  def _process_health_pill_event(self, node_name_set, mapping, target_step,
-                                 file_path):
-    """Creates health pills out of data in an event.
-
-    Creates health pills out of the event and adds them to the mapping.
-
-    Args:
-      node_name_set: A set of node names that are relevant.
- mapping: The mapping from node name to event_accumulator.HealthPillEvents. - This object may be destructively modified. - target_step: The target step at which to obtain health pills. - file_path: The path to the file with health pill events. - - Returns: - Whether we should stop reading events because future events are no longer - relevant. - """ - events_loader = event_file_loader.EventFileLoader(file_path) - for event in events_loader.Load(): - if not event.HasField('summary'): - logging.warning('An event in a debugger events file lacks a summary.') - continue - - if event.step < target_step: - # This event is not of the relevant step. We perform this check - # first because the majority of events will be eliminated from - # consideration by this check. - continue - - if event.step > target_step: - # We have passed the relevant step. No need to read more events. - return True - - for value in event.summary.value: - # Since we seek health pills for a specific step, this function - # returns 1 health pill per node per step. The wall time is the - # seconds since the epoch. - health_pill = self._process_health_pill_value( - node_name_set, event.wall_time, event.step, value) - if not health_pill: - continue - mapping[health_pill.node_name].append(health_pill) - - # Keep reading events. - return False - - def _process_health_pill_value(self, node_name_set, wall_time, step, value): - """Creates a dict containing various properties of a health pill. - - Args: - node_name_set: A set of node names that are relevant. - wall_time: The wall time in seconds. - step: The session run step of the event. - value: The health pill value. - - Returns: - An event_accumulator.HealthPillEvent. Or None if one could not be created. 
-    """
-    if not value.HasField('tensor'):
-      logging.warning(
-          'An event in a debugger events file lacks a tensor value.')
-      return None
-
-    if value.tag != event_accumulator.HEALTH_PILL_EVENT_TAG:
-      logging.warning(
-          ('A debugger-related event lacks the %r tag. It instead has '
-           'the %r tag.'), event_accumulator.HEALTH_PILL_EVENT_TAG, value.tag)
-      return None
-
-    match = re.match(r'^(.*):(\d+):DebugNumericSummary$', value.node_name)
-    if not match:
-      logging.warning(
-          ('An event with a health pill has an invalid watch (i.e., an '
-           'unexpected debug op): %r'), value.node_name)
-      return None
-
-    node_name = match.group(1)
-    if node_name not in node_name_set:
-      # This event is not relevant.
-      return None
-
-    # Since we seek health pills for a specific step, this function
-    # returns 1 health pill per node per step. The wall time is the
-    # seconds since the epoch.
-    return event_accumulator.HealthPillEvent(
-        wall_time=wall_time,
-        step=step,
-        node_name=node_name,
-        output_slot=int(match.group(2)),
-        value=list(tensor_util.MakeNdarray(value.tensor)))
diff --git a/tensorflow/tensorboard/plugins/debugger/debugger_plugin_test.py b/tensorflow/tensorboard/plugins/debugger/debugger_plugin_test.py
deleted file mode 100644
index f1cc2e06da2..00000000000
--- a/tensorflow/tensorboard/plugins/debugger/debugger_plugin_test.py
+++ /dev/null
@@ -1,300 +0,0 @@
-# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
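The deleted `_process_health_pill_value` above recognizes debug watch keys of the form `node_name:output_slot:DebugNumericSummary`. A minimal standalone sketch of that parsing, using the same regular expression but hypothetical helper and variable names (not part of the plugin):

```python
import re

# Same pattern the plugin uses to recognize DebugNumericSummary watch keys.
_WATCH_KEY_PATTERN = re.compile(r'^(.*):(\d+):DebugNumericSummary$')


def parse_watch_key(watch_key):
    """Returns (node_name, output_slot) for a valid watch key, else None."""
    match = _WATCH_KEY_PATTERN.match(watch_key)
    if not match:
        # Not a DebugNumericSummary watch; the plugin logs and skips these.
        return None
    return match.group(1), int(match.group(2))


print(parse_watch_key('layers/Matmul:0:DebugNumericSummary'))  # ('layers/Matmul', 0)
print(parse_watch_key('layers/Matmul:0:DebugIdentity'))        # None
```

Note the greedy `(.*)` keeps any colons inside the node name itself with the node name, since `(\d+):DebugNumericSummary` must anchor at the end.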
-# ==============================================================================
-"""Tests the TensorBoard debugger data plugin."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-import collections
-import json
-import os
-import shutil
-
-import numpy as np
-from werkzeug import test as werkzeug_test
-from werkzeug import wrappers
-
-from tensorflow.core.framework import types_pb2
-from tensorflow.core.util import event_pb2
-from tensorflow.python import pywrap_tensorflow
-from tensorflow.python.platform import test
-from tensorflow.python.util import compat
-from tensorflow.tensorboard.backend import application
-from tensorflow.tensorboard.backend.event_processing import event_multiplexer
-from tensorflow.tensorboard.plugins.debugger import debugger_plugin
-
-
-class DebuggerPluginTest(test.TestCase):
-
-  def setUp(self):
-    # Populate the log directory with debugger events for run '.'.
-    self.log_dir = self.get_temp_dir()
-    file_prefix = compat.as_bytes(os.path.join(self.log_dir, 'events.debugger'))
-    writer = pywrap_tensorflow.EventsWriter(file_prefix)
-    writer.WriteEvent(
-        self._CreateEventWithDebugNumericSummary(
-            op_name='layers/Matmul',
-            output_slot=0,
-            wall_time=42,
-            step=2,
-            list_of_values=[1, 2, 3]))
-    writer.WriteEvent(
-        self._CreateEventWithDebugNumericSummary(
-            op_name='layers/Matmul',
-            output_slot=1,
-            wall_time=43,
-            step=7,
-            list_of_values=[4, 5, 6]))
-    writer.WriteEvent(
-        self._CreateEventWithDebugNumericSummary(
-            op_name='logits/Add',
-            output_slot=0,
-            wall_time=1337,
-            step=7,
-            list_of_values=[7, 8, 9]))
-    writer.WriteEvent(
-        self._CreateEventWithDebugNumericSummary(
-            op_name='logits/Add',
-            output_slot=0,
-            wall_time=1338,
-            step=8,
-            list_of_values=[10, 11, 12]))
-    writer.Close()
-
-    # Populate the log directory with debugger events for run 'run_foo'.
-    run_foo_directory = os.path.join(self.log_dir, 'run_foo')
-    os.mkdir(run_foo_directory)
-    file_prefix = compat.as_bytes(
-        os.path.join(run_foo_directory, 'events.debugger'))
-    writer = pywrap_tensorflow.EventsWriter(file_prefix)
-    writer.WriteEvent(
-        self._CreateEventWithDebugNumericSummary(
-            op_name='layers/Variable',
-            output_slot=0,
-            wall_time=4242,
-            step=42,
-            list_of_values=[13, 14, 15]))
-    writer.Close()
-
-    # Start a server that will receive requests and respond with health pills.
-    self.multiplexer = event_multiplexer.EventMultiplexer({
-        '.': self.log_dir,
-        'run_foo': run_foo_directory,
-    })
-    self.plugin = debugger_plugin.DebuggerPlugin()
-    wsgi_app = application.TensorBoardWSGIApp(
-        self.log_dir, [self.plugin],
-        self.multiplexer,
-        reload_interval=0)
-    self.server = werkzeug_test.Client(wsgi_app, wrappers.BaseResponse)
-
-  def tearDown(self):
-    # Remove the directory with debugger-related events files.
-    shutil.rmtree(self.log_dir, ignore_errors=True)
-
-  def _CreateEventWithDebugNumericSummary(
-      self, op_name, output_slot, wall_time, step, list_of_values):
-    """Creates an event with a health pill summary.
-
-    Args:
-      op_name: The name of the op to which a DebugNumericSummary was attached.
-      output_slot: The numeric output slot for the tensor.
-      wall_time: The numeric wall time of the event.
-      step: The step of the event.
-      list_of_values: A python list of values within the tensor.
-
-    Returns:
-      An event_pb2.Event with a health pill summary.
-    """
-    event = event_pb2.Event(step=step, wall_time=wall_time)
-    value = event.summary.value.add(
-        tag='__health_pill__',
-        node_name='%s:%d:DebugNumericSummary' % (op_name, output_slot))
-    value.tensor.tensor_shape.dim.add(size=len(list_of_values))
-    value.tensor.dtype = types_pb2.DT_DOUBLE
-    value.tensor.tensor_content = np.array(
-        list_of_values, dtype=np.float64).tobytes()
-    return event
-
-  def _DeserializeResponse(self, byte_content):
-    """Deserializes byte content that is a JSON encoding.
- - Args: - byte_content: The byte content of a JSON response. - - Returns: - The deserialized python object decoded from JSON. - """ - return json.loads(byte_content.decode('utf-8')) - - def testHealthPillsRouteProvided(self): - """Tests that the plugin offers the route for requesting health pills.""" - apps = self.plugin.get_plugin_apps(self.multiplexer, self.log_dir) - self.assertIn('/health_pills', apps) - self.assertIsInstance(apps['/health_pills'], collections.Callable) - - def testHealthPillsPluginIsActive(self): - self.plugin.get_plugin_apps(self.multiplexer, self.log_dir) - - # The multiplexer has sampled health pills. - self.assertTrue(self.plugin.is_active()) - - def testHealthPillsPluginIsInactive(self): - self.plugin.get_plugin_apps( - event_multiplexer.EventMultiplexer({}), self.log_dir) - - # The multiplexer lacks sampled health pills. - self.assertFalse(self.plugin.is_active()) - - def testRequestHealthPillsForRunFoo(self): - """Tests that the plugin produces health pills for a specified run.""" - response = self.server.post( - '/data/plugin/debugger/health_pills', - data={ - 'node_names': json.dumps(['layers/Variable', 'unavailable_node']), - 'run': 'run_foo', - }) - self.assertEqual(200, response.status_code) - self.assertDictEqual({ - 'layers/Variable': [{ - 'wall_time': 4242, - 'step': 42, - 'node_name': 'layers/Variable', - 'output_slot': 0, - 'value': [13, 14, 15], - }], - }, self._DeserializeResponse(response.get_data())) - - def testRequestHealthPillsForDefaultRun(self): - """Tests that the plugin produces health pills for the default '.' run.""" - # Do not provide a 'run' parameter in POST data. - response = self.server.post( - '/data/plugin/debugger/health_pills', - data={ - 'node_names': json.dumps(['logits/Add', 'unavailable_node']), - }) - self.assertEqual(200, response.status_code) - # The health pills for 'layers/Matmul' should not be included since the - # request excluded that node name. 
-    self.assertDictEqual({
-        'logits/Add': [
-            {
-                'wall_time': 1337,
-                'step': 7,
-                'node_name': 'logits/Add',
-                'output_slot': 0,
-                'value': [7, 8, 9],
-            },
-            {
-                'wall_time': 1338,
-                'step': 8,
-                'node_name': 'logits/Add',
-                'output_slot': 0,
-                'value': [10, 11, 12],
-            },
-        ],
-    }, self._DeserializeResponse(response.get_data()))
-
-  def testGetRequestsUnsupported(self):
-    """Tests that GET requests are unsupported."""
-    response = self.server.get('/data/plugin/debugger/health_pills')
-    self.assertEqual(405, response.status_code)
-
-  def testRequestsWithoutProperPostKeyUnsupported(self):
-    """Tests that requests lacking the node_names POST key are unsupported."""
-    response = self.server.post('/data/plugin/debugger/health_pills')
-    self.assertEqual(400, response.status_code)
-
-  def testRequestsWithBadJsonUnsupported(self):
-    """Tests that requests with undecodable JSON are unsupported."""
-    response = self.server.post(
-        '/data/plugin/debugger/health_pills',
-        data={
-            'node_names': 'some obviously non JSON text',
-        })
-    self.assertEqual(400, response.status_code)
-
-  def testRequestsWithNonListPostDataUnsupported(self):
-    """Tests that requests whose payloads lack lists of ops are unsupported."""
-    response = self.server.post(
-        '/data/plugin/debugger/health_pills',
-        data={
-            'node_names': json.dumps({
-                'this is a dict': 'and not a list.'
-            }),
-        })
-    self.assertEqual(400, response.status_code)
-
-  def testFetchHealthPillsForSpecificStep(self):
-    """Tests that requesting health pills at a specific step works.
-
-    This path may be slow in real life because it reads from disk.
-    """
-    # Request health pills for these nodes at step 7 specifically.
-    response = self.server.post(
-        '/data/plugin/debugger/health_pills',
-        data={
-            'node_names': json.dumps(['logits/Add', 'layers/Matmul']),
-            'step': 7
-        })
-    self.assertEqual(200, response.status_code)
-    # The response should only include health pills at step 7.
-    self.assertDictEqual({
-        'logits/Add': [
-            {
-                'wall_time': 1337,
-                'step': 7,
-                'node_name': 'logits/Add',
-                'output_slot': 0,
-                'value': [7, 8, 9],
-            },
-        ],
-        'layers/Matmul': [
-            {
-                'wall_time': 43,
-                'step': 7,
-                'node_name': 'layers/Matmul',
-                'output_slot': 1,
-                'value': [4, 5, 6],
-            },
-        ],
-    }, self._DeserializeResponse(response.get_data()))
-
-  def testNoHealthPillsForSpecificStep(self):
-    """Tests that an empty mapping is returned for no health pills at a step."""
-    response = self.server.post(
-        '/data/plugin/debugger/health_pills',
-        data={
-            'node_names': json.dumps(['some/clearly/non-existent/op']),
-            'step': 7
-        })
-    self.assertEqual(200, response.status_code)
-    self.assertDictEqual({}, self._DeserializeResponse(response.get_data()))
-
-  def testNoHealthPillsForOutOfRangeStep(self):
-    """Tests that an empty mapping is returned for an out of range step."""
-    response = self.server.post(
-        '/data/plugin/debugger/health_pills',
-        data={
-            'node_names': json.dumps(['logits/Add', 'layers/Matmul']),
-            # This step is higher than that of any event written to disk.
- 'step': 42424242 - }) - self.assertEqual(200, response.status_code) - self.assertDictEqual({}, self._DeserializeResponse(response.get_data())) - -if __name__ == '__main__': - test.main() diff --git a/tensorflow/tensorboard/tensorboard.py b/tensorflow/tensorboard/tensorboard.py index f3900d1e5df..f371a01f35b 100644 --- a/tensorflow/tensorboard/tensorboard.py +++ b/tensorflow/tensorboard/tensorboard.py @@ -32,7 +32,8 @@ from tensorflow.python.platform import flags from tensorflow.python.platform import tf_logging as logging from tensorflow.tensorboard.backend import application from tensorflow.tensorboard.backend.event_processing import event_file_inspector as efi - +from tensorflow.tensorboard.plugins.projector import projector_plugin +from tensorflow.tensorboard.plugins.text import text_plugin # TensorBoard flags @@ -88,8 +89,18 @@ flags.DEFINE_string( FLAGS = flags.FLAGS -def create_tb_app(): - """Read the flags, and create a TensorBoard WSGI application.""" +def create_tb_app(plugins): + """Read the flags, and create a TensorBoard WSGI application. + + Args: + plugins: A list of plugins for TensorBoard to initialize. + + Raises: + ValueError: if a logdir is not specified. + + Returns: + A new TensorBoard WSGI application. + """ if not FLAGS.logdir: raise ValueError('A logdir must be specified. 
Run `tensorboard --help` for ' 'details and examples.') @@ -98,7 +109,8 @@ def create_tb_app(): return application.standard_tensorboard_wsgi( logdir=logdir, purge_orphaned_data=FLAGS.purge_orphaned_data, - reload_interval=FLAGS.reload_interval) + reload_interval=FLAGS.reload_interval, + plugins=plugins) def make_simple_server(tb_app, host, port): @@ -184,7 +196,11 @@ def main(unused_argv=None): efi.inspect(FLAGS.logdir, event_file, FLAGS.tag) return 0 else: - tb = create_tb_app() + plugins = [ + projector_plugin.ProjectorPlugin(), + text_plugin.TextPlugin(), + ] + tb = create_tb_app(plugins) run_simple_server(tb) if __name__ == '__main__': diff --git a/tensorflow/tools/api/golden/BUILD b/tensorflow/tools/api/golden/BUILD new file mode 100644 index 00000000000..08436396a6c --- /dev/null +++ b/tensorflow/tools/api/golden/BUILD @@ -0,0 +1,24 @@ +# TensorFlow API backwards compatibility test goldens. + +package( + default_visibility = ["//tensorflow/tools/api:__subpackages__"], +) + +licenses(["notice"]) # Apache 2.0 + +filegroup( + name = "api_golden", + srcs = glob(["*.pbtxt"]), +) + +filegroup( + name = "all_files", + srcs = glob( + ["**/*"], + exclude = [ + "**/METADATA", + "**/OWNERS", + ], + ), + visibility = ["//tensorflow:__subpackages__"], +) diff --git a/tensorflow/tools/api/golden/tensorflow.-aggregation-method.pbtxt b/tensorflow/tools/api/golden/tensorflow.-aggregation-method.pbtxt new file mode 100644 index 00000000000..f79029d3fe0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-aggregation-method.pbtxt @@ -0,0 +1,24 @@ +path: "tensorflow.AggregationMethod" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "ADD_N" + mtype: "" + } + member { + name: "DEFAULT" + mtype: "" + } + member { + name: "EXPERIMENTAL_ACCUMULATE_N" + mtype: "" + } + member { + name: "EXPERIMENTAL_TREE" + mtype: "" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-attr-value.-list-value.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.-attr-value.-list-value.pbtxt new file mode 100644 index 00000000000..0fb1aaba283 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-attr-value.-list-value.pbtxt @@ -0,0 +1,108 @@ +path: "tensorflow.AttrValue.ListValue" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "B_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FUNC_FIELD_NUMBER" + mtype: "" + } + member { + name: "F_FIELD_NUMBER" + mtype: "" + } + member { + name: "I_FIELD_NUMBER" + mtype: "" + } + member { + name: "SHAPE_FIELD_NUMBER" + mtype: "" + } + member { + name: "S_FIELD_NUMBER" + mtype: "" + } + member { + name: "TENSOR_FIELD_NUMBER" + mtype: "" + } + member { + name: "TYPE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-attr-value.pbtxt b/tensorflow/tools/api/golden/tensorflow.-attr-value.pbtxt new file mode 100644 index 00000000000..e7a3a1f02fa --- 
/dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-attr-value.pbtxt @@ -0,0 +1,120 @@ +path: "tensorflow.AttrValue" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "B_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FUNC_FIELD_NUMBER" + mtype: "" + } + member { + name: "F_FIELD_NUMBER" + mtype: "" + } + member { + name: "I_FIELD_NUMBER" + mtype: "" + } + member { + name: "LIST_FIELD_NUMBER" + mtype: "" + } + member { + name: "ListValue" + mtype: "" + } + member { + name: "PLACEHOLDER_FIELD_NUMBER" + mtype: "" + } + member { + name: "SHAPE_FIELD_NUMBER" + mtype: "" + } + member { + name: "S_FIELD_NUMBER" + mtype: "" + } + member { + name: "TENSOR_FIELD_NUMBER" + mtype: "" + } + member { + name: "TYPE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-auto-parallel-options.pbtxt b/tensorflow/tools/api/golden/tensorflow.-auto-parallel-options.pbtxt new file mode 100644 
index 00000000000..c8f3e8fb154 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-auto-parallel-options.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.AutoParallelOptions" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "ENABLE_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "NUM_REPLICAS_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-conditional-accumulator-base.pbtxt b/tensorflow/tools/api/golden/tensorflow.-conditional-accumulator-base.pbtxt new file mode 100644 index 00000000000..c9a32c16b34 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-conditional-accumulator-base.pbtxt @@ -0,0 +1,29 @@ +path: "tensorflow.ConditionalAccumulatorBase" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "accumulator_ref" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member_method { + name: 
"__init__" + argspec: "args=[\'self\', \'dtype\', \'shape\', \'accumulator_ref\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "num_accumulated" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "set_global_step" + argspec: "args=[\'self\', \'new_global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-conditional-accumulator.pbtxt b/tensorflow/tools/api/golden/tensorflow.-conditional-accumulator.pbtxt new file mode 100644 index 00000000000..d23b3bd0cae --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-conditional-accumulator.pbtxt @@ -0,0 +1,38 @@ +path: "tensorflow.ConditionalAccumulator" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "accumulator_ref" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'dtype\', \'shape\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'conditional_accumulator\'], " + } + member_method { + name: "apply_grad" + argspec: "args=[\'self\', \'grad\', \'local_step\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'None\'], " + } + member_method { + name: "num_accumulated" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "set_global_step" + argspec: "args=[\'self\', \'new_global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "take_grad" + argspec: "args=[\'self\', \'num_required\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-config-proto.-device-count-entry.pbtxt b/tensorflow/tools/api/golden/tensorflow.-config-proto.-device-count-entry.pbtxt 
new file mode 100644 index 00000000000..29bb3be35cb --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-config-proto.-device-count-entry.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.ConfigProto.DeviceCountEntry" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "KEY_FIELD_NUMBER" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-config-proto.pbtxt b/tensorflow/tools/api/golden/tensorflow.-config-proto.pbtxt new file mode 100644 index 00000000000..805a9bdd4f1 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-config-proto.pbtxt @@ -0,0 +1,132 @@ +path: "tensorflow.ConfigProto" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "ALLOW_SOFT_PLACEMENT_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "DEVICE_COUNT_FIELD_NUMBER" + mtype: "" + } + member { + name: 
"DEVICE_FILTERS_FIELD_NUMBER" + mtype: "" + } + member { + name: "DeviceCountEntry" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "GPU_OPTIONS_FIELD_NUMBER" + mtype: "" + } + member { + name: "GRAPH_OPTIONS_FIELD_NUMBER" + mtype: "" + } + member { + name: "INTER_OP_PARALLELISM_THREADS_FIELD_NUMBER" + mtype: "" + } + member { + name: "INTRA_OP_PARALLELISM_THREADS_FIELD_NUMBER" + mtype: "" + } + member { + name: "LOG_DEVICE_PLACEMENT_FIELD_NUMBER" + mtype: "" + } + member { + name: "OPERATION_TIMEOUT_IN_MS_FIELD_NUMBER" + mtype: "" + } + member { + name: "PLACEMENT_PERIOD_FIELD_NUMBER" + mtype: "" + } + member { + name: "RPC_OPTIONS_FIELD_NUMBER" + mtype: "" + } + member { + name: "SESSION_INTER_OP_THREAD_POOL_FIELD_NUMBER" + mtype: "" + } + member { + name: "USE_PER_SESSION_THREADS_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-d-type.pbtxt b/tensorflow/tools/api/golden/tensorflow.-d-type.pbtxt new file mode 100644 index 00000000000..0b5b88bba80 --- 
/dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-d-type.pbtxt @@ -0,0 +1,77 @@ +path: "tensorflow.DType" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "as_datatype_enum" + mtype: "" + } + member { + name: "as_numpy_dtype" + mtype: "" + } + member { + name: "base_dtype" + mtype: "" + } + member { + name: "is_bool" + mtype: "" + } + member { + name: "is_complex" + mtype: "" + } + member { + name: "is_floating" + mtype: "" + } + member { + name: "is_integer" + mtype: "" + } + member { + name: "is_numpy_compatible" + mtype: "" + } + member { + name: "is_quantized" + mtype: "" + } + member { + name: "is_unsigned" + mtype: "" + } + member { + name: "limits" + mtype: "" + } + member { + name: "max" + mtype: "" + } + member { + name: "min" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "real_dtype" + mtype: "" + } + member { + name: "size" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'type_enum\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_compatible_with" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-device-spec.pbtxt b/tensorflow/tools/api/golden/tensorflow.-device-spec.pbtxt new file mode 100644 index 00000000000..92e535c3414 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-device-spec.pbtxt @@ -0,0 +1,37 @@ +path: "tensorflow.DeviceSpec" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "job" + mtype: "" + } + member { + name: "replica" + mtype: "" + } + member { + name: "task" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'job\', \'replica\', \'task\', \'device_type\', \'device_index\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "from_string" + argspec: "args=[\'spec\'], varargs=None, keywords=None, 
defaults=None" + } + member_method { + name: "merge_from" + argspec: "args=[\'self\', \'dev\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "parse_from_string" + argspec: "args=[\'self\', \'spec\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "to_string" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-dimension.pbtxt b/tensorflow/tools/api/golden/tensorflow.-dimension.pbtxt new file mode 100644 index 00000000000..a9ab27719b4 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-dimension.pbtxt @@ -0,0 +1,25 @@ +path: "tensorflow.Dimension" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "value" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "assert_is_compatible_with" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_compatible_with" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "merge_with" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-event.pbtxt b/tensorflow/tools/api/golden/tensorflow.-event.pbtxt new file mode 100644 index 00000000000..9bf8c124288 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-event.pbtxt @@ -0,0 +1,112 @@ +path: "tensorflow.Event" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FILE_VERSION_FIELD_NUMBER" + mtype: "" + } + member { + name: "GRAPH_DEF_FIELD_NUMBER" + mtype: "" + } + member { + name: "LOG_MESSAGE_FIELD_NUMBER" + mtype: "" + } + member { + name: "META_GRAPH_DEF_FIELD_NUMBER" + mtype: "" + 
} + member { + name: "SESSION_LOG_FIELD_NUMBER" + mtype: "" + } + member { + name: "STEP_FIELD_NUMBER" + mtype: "" + } + member { + name: "SUMMARY_FIELD_NUMBER" + mtype: "" + } + member { + name: "TAGGED_RUN_METADATA_FIELD_NUMBER" + mtype: "" + } + member { + name: "WALL_TIME_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-f-i-f-o-queue.pbtxt b/tensorflow/tools/api/golden/tensorflow.-f-i-f-o-queue.pbtxt new file mode 100644 index 00000000000..72cc5324476 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-f-i-f-o-queue.pbtxt @@ -0,0 +1,62 @@ +path: "tensorflow.FIFOQueue" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "dtypes" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "names" + mtype: "" + } + member { + name: "queue_ref" + mtype: "" + } + member { + name: "shapes" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'capacity\', \'dtypes\', \'shapes\', \'names\', 
\'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'fifo_queue\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\', \'cancel_pending_enqueues\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "dequeue" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_many" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_up_to" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue_many" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "from_list" + argspec: "args=[\'index\', \'queues\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "size" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-fixed-len-feature.pbtxt b/tensorflow/tools/api/golden/tensorflow.-fixed-len-feature.pbtxt new file mode 100644 index 00000000000..6933814a7b6 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-fixed-len-feature.pbtxt @@ -0,0 +1,27 @@ +path: "tensorflow.FixedLenFeature" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "default_value" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "shape" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-fixed-len-sequence-feature.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.-fixed-len-sequence-feature.pbtxt new file mode 100644 index 00000000000..c5387879519 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-fixed-len-sequence-feature.pbtxt @@ -0,0 +1,31 @@ +path: "tensorflow.FixedLenSequenceFeature" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "allow_missing" + mtype: "" + } + member { + name: "default_value" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "shape" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-fixed-length-record-reader.pbtxt b/tensorflow/tools/api/golden/tensorflow.-fixed-length-record-reader.pbtxt new file mode 100644 index 00000000000..e7e36e2bb35 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-fixed-length-record-reader.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.FixedLengthRecordReader" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "reader_ref" + mtype: "" + } + member { + name: "supports_serialize" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'record_bytes\', \'header_bytes\', \'footer_bytes\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "num_records_produced" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "num_work_units_completed" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read" + argspec: "args=[\'self\', \'queue\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read_up_to" + argspec: "args=[\'self\', \'queue\', \'num_records\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + 
member_method { + name: "reset" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "restore_state" + argspec: "args=[\'self\', \'state\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "serialize_state" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-g-p-u-options.pbtxt b/tensorflow/tools/api/golden/tensorflow.-g-p-u-options.pbtxt new file mode 100644 index 00000000000..48cda623f7c --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-g-p-u-options.pbtxt @@ -0,0 +1,104 @@ +path: "tensorflow.GPUOptions" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "ALLOCATOR_TYPE_FIELD_NUMBER" + mtype: "" + } + member { + name: "ALLOW_GROWTH_FIELD_NUMBER" + mtype: "" + } + member { + name: "DEFERRED_DELETION_BYTES_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "PER_PROCESS_GPU_MEMORY_FRACTION_FIELD_NUMBER" + mtype: "" + } + member { + name: "POLLING_ACTIVE_DELAY_USECS_FIELD_NUMBER" + mtype: "" + } + member { + name: "POLLING_INACTIVE_DELAY_MSECS_FIELD_NUMBER" + mtype: "" + } + member { + name: "VISIBLE_DEVICE_LIST_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: 
"MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-graph-def.pbtxt b/tensorflow/tools/api/golden/tensorflow.-graph-def.pbtxt new file mode 100644 index 00000000000..1495e847cb0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-graph-def.pbtxt @@ -0,0 +1,92 @@ +path: "tensorflow.GraphDef" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "LIBRARY_FIELD_NUMBER" + mtype: "" + } + member { + name: "NODE_FIELD_NUMBER" + mtype: "" + } + member { + name: "VERSIONS_FIELD_NUMBER" + mtype: "" + } + member { + name: "VERSION_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} 
diff --git a/tensorflow/tools/api/golden/tensorflow.-graph-keys.pbtxt b/tensorflow/tools/api/golden/tensorflow.-graph-keys.pbtxt new file mode 100644 index 00000000000..ef2cfe3787e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-graph-keys.pbtxt @@ -0,0 +1,136 @@ +path: "tensorflow.GraphKeys" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "ACTIVATIONS" + mtype: "" + } + member { + name: "ASSET_FILEPATHS" + mtype: "" + } + member { + name: "BIASES" + mtype: "" + } + member { + name: "CONCATENATED_VARIABLES" + mtype: "" + } + member { + name: "COND_CONTEXT" + mtype: "" + } + member { + name: "EVAL_STEP" + mtype: "" + } + member { + name: "GLOBAL_STEP" + mtype: "" + } + member { + name: "GLOBAL_VARIABLES" + mtype: "" + } + member { + name: "INIT_OP" + mtype: "" + } + member { + name: "LOCAL_INIT_OP" + mtype: "" + } + member { + name: "LOCAL_RESOURCES" + mtype: "" + } + member { + name: "LOCAL_VARIABLES" + mtype: "" + } + member { + name: "LOSSES" + mtype: "" + } + member { + name: "MODEL_VARIABLES" + mtype: "" + } + member { + name: "MOVING_AVERAGE_VARIABLES" + mtype: "" + } + member { + name: "QUEUE_RUNNERS" + mtype: "" + } + member { + name: "READY_FOR_LOCAL_INIT_OP" + mtype: "" + } + member { + name: "READY_OP" + mtype: "" + } + member { + name: "REGULARIZATION_LOSSES" + mtype: "" + } + member { + name: "RESOURCES" + mtype: "" + } + member { + name: "SAVEABLE_OBJECTS" + mtype: "" + } + member { + name: "SAVERS" + mtype: "" + } + member { + name: "SUMMARIES" + mtype: "" + } + member { + name: "SUMMARY_OP" + mtype: "" + } + member { + name: "TABLE_INITIALIZERS" + mtype: "" + } + member { + name: "TRAINABLE_RESOURCE_VARIABLES" + mtype: "" + } + member { + name: "TRAINABLE_VARIABLES" + mtype: "" + } + member { + name: "TRAIN_OP" + mtype: "" + } + member { + name: "UPDATE_OPS" + mtype: "" + } + member { + name: "VARIABLES" + mtype: "" + } + member { + name: "WEIGHTS" + mtype: "" + } + member { + name: "WHILE_CONTEXT" + mtype: "" + } + 
member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-graph-options.pbtxt b/tensorflow/tools/api/golden/tensorflow.-graph-options.pbtxt new file mode 100644 index 00000000000..0844f891cad --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-graph-options.pbtxt @@ -0,0 +1,112 @@ +path: "tensorflow.GraphOptions" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "BUILD_COST_MODEL_AFTER_FIELD_NUMBER" + mtype: "" + } + member { + name: "BUILD_COST_MODEL_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "ENABLE_BFLOAT16_SENDRECV_FIELD_NUMBER" + mtype: "" + } + member { + name: "ENABLE_RECV_SCHEDULING_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "INFER_SHAPES_FIELD_NUMBER" + mtype: "" + } + member { + name: "OPTIMIZER_OPTIONS_FIELD_NUMBER" + mtype: "" + } + member { + name: "PLACE_PRUNED_GRAPH_FIELD_NUMBER" + mtype: "" + } + member { + name: "REWRITE_OPTIONS_FIELD_NUMBER" + mtype: "" + } + member { + name: "TIMELINE_STEP_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } 
+ member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-graph.pbtxt b/tensorflow/tools/api/golden/tensorflow.-graph.pbtxt new file mode 100644 index 00000000000..566456a255d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-graph.pbtxt @@ -0,0 +1,129 @@ +path: "tensorflow.Graph" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "building_function" + mtype: "" + } + member { + name: "finalized" + mtype: "" + } + member { + name: "graph_def_versions" + mtype: "" + } + member { + name: "seed" + mtype: "" + } + member { + name: "version" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "add_to_collection" + argspec: "args=[\'self\', \'name\', \'value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "add_to_collections" + argspec: "args=[\'self\', \'names\', \'value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_default" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_graph_def" + argspec: "args=[\'self\', \'from_version\', \'add_shapes\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], " + } + member_method { + name: "as_graph_element" + argspec: "args=[\'self\', \'obj\', \'allow_tensor\', \'allow_operation\'], varargs=None, keywords=None, defaults=[\'True\', \'True\'], " + } + member_method { + name: "clear_collection" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "colocate_with" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "container" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "control_dependencies" + argspec: "args=[\'self\', \'control_inputs\'], 
varargs=None, keywords=None, defaults=None" + } + member_method { + name: "create_op" + argspec: "args=[\'self\', \'op_type\', \'inputs\', \'dtypes\', \'input_types\', \'name\', \'attrs\', \'op_def\', \'compute_shapes\', \'compute_device\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'True\', \'True\'], " + } + member_method { + name: "device" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "finalize" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_all_collection_keys" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_collection" + argspec: "args=[\'self\', \'name\', \'scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_collection_ref" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_operation_by_name" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_operations" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_tensor_by_name" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "gradient_override_map" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "is_feedable" + argspec: "args=[\'self\', \'tensor\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_fetchable" + argspec: "args=[\'self\', \'tensor_or_op\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "name_scope" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "prevent_feeding" + argspec: "args=[\'self\', \'tensor\'], varargs=None, keywords=None, defaults=None" + } + 
member_method { + name: "prevent_fetching" + argspec: "args=[\'self\', \'op\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "unique_name" + argspec: "args=[\'self\', \'name\', \'mark_as_used\'], varargs=None, keywords=None, defaults=[\'True\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-histogram-proto.pbtxt b/tensorflow/tools/api/golden/tensorflow.-histogram-proto.pbtxt new file mode 100644 index 00000000000..2567d2fe602 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-histogram-proto.pbtxt @@ -0,0 +1,104 @@ +path: "tensorflow.HistogramProto" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "BUCKET_FIELD_NUMBER" + mtype: "" + } + member { + name: "BUCKET_LIMIT_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "MAX_FIELD_NUMBER" + mtype: "" + } + member { + name: "MIN_FIELD_NUMBER" + mtype: "" + } + member { + name: "NUM_FIELD_NUMBER" + mtype: "" + } + member { + name: "SUM_FIELD_NUMBER" + mtype: "" + } + member { + name: "SUM_SQUARES_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + 
member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-identity-reader.pbtxt b/tensorflow/tools/api/golden/tensorflow.-identity-reader.pbtxt new file mode 100644 index 00000000000..2eda320d636 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-identity-reader.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.IdentityReader" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "reader_ref" + mtype: "" + } + member { + name: "supports_serialize" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "num_records_produced" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "num_work_units_completed" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read" + argspec: "args=[\'self\', \'queue\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read_up_to" + argspec: "args=[\'self\', \'queue\', \'num_records\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reset" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "restore_state" + argspec: "args=[\'self\', \'state\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "serialize_state" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-indexed-slices.pbtxt b/tensorflow/tools/api/golden/tensorflow.-indexed-slices.pbtxt new file mode 100644 index 00000000000..fee84d85307 --- /dev/null +++ 
b/tensorflow/tools/api/golden/tensorflow.-indexed-slices.pbtxt @@ -0,0 +1,42 @@ +path: "tensorflow.IndexedSlices" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "dense_shape" + mtype: "" + } + member { + name: "device" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "graph" + mtype: "" + } + member { + name: "indices" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member { + name: "values" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'values\', \'indices\', \'dense_shape\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-interactive-session.pbtxt b/tensorflow/tools/api/golden/tensorflow.-interactive-session.pbtxt new file mode 100644 index 00000000000..623f4b1a273 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-interactive-session.pbtxt @@ -0,0 +1,43 @@ +path: "tensorflow.InteractiveSession" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "graph" + mtype: "" + } + member { + name: "graph_def" + mtype: "" + } + member { + name: "sess_str" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'target\', \'graph\', \'config\'], varargs=None, keywords=None, defaults=[\'\', \'None\', \'None\'], " + } + member_method { + name: "as_default" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "partial_run" + argspec: "args=[\'self\', \'handle\', \'fetches\', \'feed_dict\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "partial_run_setup" + argspec: "args=[\'self\', \'fetches\', \'feeds\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + 
name: "run" + argspec: "args=[\'self\', \'fetches\', \'feed_dict\', \'options\', \'run_metadata\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-log-message.pbtxt b/tensorflow/tools/api/golden/tensorflow.-log-message.pbtxt new file mode 100644 index 00000000000..a43c5eb7e30 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-log-message.pbtxt @@ -0,0 +1,112 @@ +path: "tensorflow.LogMessage" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DEBUGGING" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "ERROR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FATAL" + mtype: "" + } + member { + name: "INFO" + mtype: "" + } + member { + name: "LEVEL_FIELD_NUMBER" + mtype: "" + } + member { + name: "Level" + mtype: "" + } + member { + name: "MESSAGE_FIELD_NUMBER" + mtype: "" + } + member { + name: "UNKNOWN" + mtype: "" + } + member { + name: "WARN" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: 
"__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-name-attr-list.-attr-entry.pbtxt b/tensorflow/tools/api/golden/tensorflow.-name-attr-list.-attr-entry.pbtxt new file mode 100644 index 00000000000..2750bd780ca --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-name-attr-list.-attr-entry.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.NameAttrList.AttrEntry" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "KEY_FIELD_NUMBER" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-name-attr-list.pbtxt b/tensorflow/tools/api/golden/tensorflow.-name-attr-list.pbtxt new file mode 100644 index 00000000000..d10faf67d02 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-name-attr-list.pbtxt @@ -0,0 +1,88 @@ +path: "tensorflow.NameAttrList" +tf_class { + is_instance: "" + is_instance: "" + member { + name: 
"ATTR_FIELD_NUMBER" + mtype: "" + } + member { + name: "AttrEntry" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "NAME_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-node-def.-attr-entry.pbtxt b/tensorflow/tools/api/golden/tensorflow.-node-def.-attr-entry.pbtxt new file mode 100644 index 00000000000..b1b62d60f1e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-node-def.-attr-entry.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.NodeDef.AttrEntry" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "KEY_FIELD_NUMBER" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: 
"CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-node-def.pbtxt b/tensorflow/tools/api/golden/tensorflow.-node-def.pbtxt new file mode 100644 index 00000000000..b812b4df2b3 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-node-def.pbtxt @@ -0,0 +1,100 @@ +path: "tensorflow.NodeDef" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "ATTR_FIELD_NUMBER" + mtype: "" + } + member { + name: "AttrEntry" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "DEVICE_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "INPUT_FIELD_NUMBER" + mtype: "" + } + member { + name: "NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "OP_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method 
{ + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-op-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.-op-error.pbtxt new file mode 100644 index 00000000000..7e59615534f --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-op-error.pbtxt @@ -0,0 +1,29 @@ +path: "tensorflow.OpError" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\', \'error_code\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-operation.pbtxt b/tensorflow/tools/api/golden/tensorflow.-operation.pbtxt new file mode 100644 index 00000000000..0f43a49ee96 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-operation.pbtxt @@ -0,0 +1,65 @@ +path: "tensorflow.Operation" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "control_inputs" + mtype: "" + } + member { + name: "device" + mtype: "" + } + member { + name: "graph" + mtype: "" + } + member { + name: "inputs" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op_def" + mtype: "" + } + member { + name: "outputs" + mtype: "" + } + member { + name: "traceback" + mtype: "" + } + member 
{ + name: "type" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'g\', \'inputs\', \'output_types\', \'control_inputs\', \'input_types\', \'original_op\', \'op_def\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "colocation_groups" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_attr" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "run" + argspec: "args=[\'self\', \'feed_dict\', \'session\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "values" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-optimizer-options.pbtxt b/tensorflow/tools/api/golden/tensorflow.-optimizer-options.pbtxt new file mode 100644 index 00000000000..5dd1ee47c96 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-optimizer-options.pbtxt @@ -0,0 +1,128 @@ +path: "tensorflow.OptimizerOptions" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DEFAULT" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "DO_COMMON_SUBEXPRESSION_ELIMINATION_FIELD_NUMBER" + mtype: "" + } + member { + name: "DO_CONSTANT_FOLDING_FIELD_NUMBER" + mtype: "" + } + member { + name: "DO_FUNCTION_INLINING_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "GLOBAL_JIT_LEVEL_FIELD_NUMBER" + mtype: "" + } + member { + name: "GlobalJitLevel" + mtype: "" + } + member { + name: "L0" + mtype: "" + } + member { + name: "L1" + mtype: "" + } + member { + name: "Level" + mtype: "" + } + member { + name: "OFF" + mtype: "" + } + member { + name: "ON_1" + mtype: "" + } + member { + name: "ON_2" + mtype: "" + } + member { + name: "OPT_LEVEL_FIELD_NUMBER" 
+ mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-padding-f-i-f-o-queue.pbtxt b/tensorflow/tools/api/golden/tensorflow.-padding-f-i-f-o-queue.pbtxt new file mode 100644 index 00000000000..1bfe723ce75 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-padding-f-i-f-o-queue.pbtxt @@ -0,0 +1,62 @@ +path: "tensorflow.PaddingFIFOQueue" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "dtypes" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "names" + mtype: "" + } + member { + name: "queue_ref" + mtype: "" + } + member { + name: "shapes" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'capacity\', \'dtypes\', \'shapes\', \'names\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'padding_fifo_queue\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\', \'cancel_pending_enqueues\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', 
\'None\'], " + } + member_method { + name: "dequeue" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_many" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_up_to" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue_many" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "from_list" + argspec: "args=[\'index\', \'queues\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "size" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-priority-queue.pbtxt b/tensorflow/tools/api/golden/tensorflow.-priority-queue.pbtxt new file mode 100644 index 00000000000..dbe25f3a5b9 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-priority-queue.pbtxt @@ -0,0 +1,62 @@ +path: "tensorflow.PriorityQueue" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "dtypes" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "names" + mtype: "" + } + member { + name: "queue_ref" + mtype: "" + } + member { + name: "shapes" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'capacity\', \'types\', \'shapes\', \'names\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'priority_queue\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\', \'cancel_pending_enqueues\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + 
member_method { + name: "dequeue" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_many" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_up_to" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue_many" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "from_list" + argspec: "args=[\'index\', \'queues\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "size" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-queue-base.pbtxt b/tensorflow/tools/api/golden/tensorflow.-queue-base.pbtxt new file mode 100644 index 00000000000..9263d73a511 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-queue-base.pbtxt @@ -0,0 +1,61 @@ +path: "tensorflow.QueueBase" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "dtypes" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "names" + mtype: "" + } + member { + name: "queue_ref" + mtype: "" + } + member { + name: "shapes" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'dtypes\', \'shapes\', \'names\', \'queue_ref\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "close" + argspec: "args=[\'self\', \'cancel_pending_enqueues\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "dequeue" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } 
+ member_method { + name: "dequeue_many" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_up_to" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue_many" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "from_list" + argspec: "args=[\'index\', \'queues\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "size" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-random-shuffle-queue.pbtxt b/tensorflow/tools/api/golden/tensorflow.-random-shuffle-queue.pbtxt new file mode 100644 index 00000000000..ec783ffe5a0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-random-shuffle-queue.pbtxt @@ -0,0 +1,62 @@ +path: "tensorflow.RandomShuffleQueue" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "dtypes" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "names" + mtype: "" + } + member { + name: "queue_ref" + mtype: "" + } + member { + name: "shapes" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'capacity\', \'min_after_dequeue\', \'dtypes\', \'shapes\', \'names\', \'seed\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'random_shuffle_queue\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\', \'cancel_pending_enqueues\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "dequeue" + argspec: "args=[\'self\', \'name\'], 
varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_many" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequeue_up_to" + argspec: "args=[\'self\', \'n\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "enqueue_many" + argspec: "args=[\'self\', \'vals\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "from_list" + argspec: "args=[\'index\', \'queues\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "size" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-reader-base.pbtxt b/tensorflow/tools/api/golden/tensorflow.-reader-base.pbtxt new file mode 100644 index 00000000000..f6a3ce76a15 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-reader-base.pbtxt @@ -0,0 +1,45 @@ +path: "tensorflow.ReaderBase" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "reader_ref" + mtype: "" + } + member { + name: "supports_serialize" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'reader_ref\', \'supports_serialize\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "num_records_produced" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "num_work_units_completed" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read" + argspec: "args=[\'self\', \'queue\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read_up_to" + argspec: 
"args=[\'self\', \'queue\', \'num_records\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reset" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "restore_state" + argspec: "args=[\'self\', \'state\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "serialize_state" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-register-gradient.pbtxt b/tensorflow/tools/api/golden/tensorflow.-register-gradient.pbtxt new file mode 100644 index 00000000000..4d6e4137d12 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-register-gradient.pbtxt @@ -0,0 +1,9 @@ +path: "tensorflow.RegisterGradient" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'op_type\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-rewriter-config.pbtxt b/tensorflow/tools/api/golden/tensorflow.-rewriter-config.pbtxt new file mode 100644 index 00000000000..34d2e176128 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-rewriter-config.pbtxt @@ -0,0 +1,112 @@ +path: "tensorflow.RewriterConfig" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "AUTO_PARALLEL_FIELD_NUMBER" + mtype: "" + } + member { + name: "CONSTANT_FOLDING_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "DISABLE_MODEL_PRUNING_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "MANUAL" + mtype: "" + } + member { + name: "MEMORY_OPTIMIZATION_FIELD_NUMBER" + mtype: "" + } + member { + name: "MemOptType" + mtype: "" + } + member { + name: "NO_MEM_OPT" + mtype: "" + } + member { + name: "OPTIMIZERS_FIELD_NUMBER" + mtype: "" + } + member { + 
name: "OPTIMIZE_TENSOR_LAYOUT_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-run-metadata.pbtxt b/tensorflow/tools/api/golden/tensorflow.-run-metadata.pbtxt new file mode 100644 index 00000000000..808fa0fa217 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-run-metadata.pbtxt @@ -0,0 +1,88 @@ +path: "tensorflow.RunMetadata" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "COST_GRAPH_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "PARTITION_GRAPHS_FIELD_NUMBER" + mtype: "" + } + member { + name: "STEP_STATS_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + 
member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-run-options.pbtxt b/tensorflow/tools/api/golden/tensorflow.-run-options.pbtxt new file mode 100644 index 00000000000..5ad6804a78c --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-run-options.pbtxt @@ -0,0 +1,116 @@ +path: "tensorflow.RunOptions" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DEBUG_OPTIONS_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FULL_TRACE" + mtype: "" + } + member { + name: "HARDWARE_TRACE" + mtype: "" + } + member { + name: "INTER_OP_THREAD_POOL_FIELD_NUMBER" + mtype: "" + } + member { + name: "NO_TRACE" + mtype: "" + } + member { + name: "OUTPUT_PARTITION_GRAPHS_FIELD_NUMBER" + mtype: "" + } + member { + name: "SOFTWARE_TRACE" + mtype: "" + } + member { + name: "TIMEOUT_IN_MS_FIELD_NUMBER" + mtype: "" + } + member { + name: "TRACE_LEVEL_FIELD_NUMBER" + mtype: "" + } + member { + name: "TraceLevel" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + 
member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-session-log.pbtxt b/tensorflow/tools/api/golden/tensorflow.-session-log.pbtxt new file mode 100644 index 00000000000..ec66d7f3354 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-session-log.pbtxt @@ -0,0 +1,108 @@ +path: "tensorflow.SessionLog" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "CHECKPOINT" + mtype: "" + } + member { + name: "CHECKPOINT_PATH_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "MSG_FIELD_NUMBER" + mtype: "" + } + member { + name: "START" + mtype: "" + } + member { + name: "STATUS_FIELD_NUMBER" + mtype: "" + } + member { + name: "STATUS_UNSPECIFIED" + mtype: "" + } + member { + name: "STOP" + mtype: "" + } + member { + name: "SessionStatus" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: 
"IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-session.pbtxt b/tensorflow/tools/api/golden/tensorflow.-session.pbtxt new file mode 100644 index 00000000000..f5c597548f4 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-session.pbtxt @@ -0,0 +1,47 @@ +path: "tensorflow.Session" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "graph" + mtype: "" + } + member { + name: "graph_def" + mtype: "" + } + member { + name: "sess_str" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'target\', \'graph\', \'config\'], varargs=None, keywords=None, defaults=[\'\', \'None\', \'None\'], " + } + member_method { + name: "as_default" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "partial_run" + argspec: "args=[\'self\', \'handle\', \'fetches\', \'feed_dict\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "partial_run_setup" + argspec: "args=[\'self\', \'fetches\', \'feeds\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reset" + argspec: "args=[\'target\', \'containers\', \'config\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "run" + argspec: "args=[\'self\', \'fetches\', \'feed_dict\', \'options\', 
\'run_metadata\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-sparse-conditional-accumulator.pbtxt b/tensorflow/tools/api/golden/tensorflow.-sparse-conditional-accumulator.pbtxt new file mode 100644 index 00000000000..2260279ad2b --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-sparse-conditional-accumulator.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.SparseConditionalAccumulator" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "accumulator_ref" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'dtype\', \'shape\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'sparse_conditional_accumulator\'], " + } + member_method { + name: "apply_grad" + argspec: "args=[\'self\', \'grad_indices\', \'grad_values\', \'grad_shape\', \'local_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'0\', \'None\'], " + } + member_method { + name: "apply_indexed_slices_grad" + argspec: "args=[\'self\', \'grad\', \'local_step\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'None\'], " + } + member_method { + name: "num_accumulated" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "set_global_step" + argspec: "args=[\'self\', \'new_global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "take_grad" + argspec: "args=[\'self\', \'num_required\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "take_indexed_slices_grad" + argspec: "args=[\'self\', \'num_required\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-sparse-feature.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.-sparse-feature.pbtxt new file mode 100644 index 00000000000..d875394fb5d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-sparse-feature.pbtxt @@ -0,0 +1,35 @@ +path: "tensorflow.SparseFeature" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "already_sorted" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "index_key" + mtype: "" + } + member { + name: "size" + mtype: "" + } + member { + name: "value_key" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-sparse-tensor-value.pbtxt b/tensorflow/tools/api/golden/tensorflow.-sparse-tensor-value.pbtxt new file mode 100644 index 00000000000..d33fd4d5d7b --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-sparse-tensor-value.pbtxt @@ -0,0 +1,26 @@ +path: "tensorflow.SparseTensorValue" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "dense_shape" + mtype: "" + } + member { + name: "indices" + mtype: "" + } + member { + name: "values" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-sparse-tensor.pbtxt b/tensorflow/tools/api/golden/tensorflow.-sparse-tensor.pbtxt new file mode 100644 index 00000000000..eac236d4982 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-sparse-tensor.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.SparseTensor" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "dense_shape" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "graph" + mtype: "" + } + member { + name: "indices" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member { + name: "values" + mtype: "" + } + member_method { + name: "__init__" + 
argspec: "args=[\'self\', \'indices\', \'values\', \'dense_shape\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "eval" + argspec: "args=[\'self\', \'feed_dict\', \'session\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "from_value" + argspec: "args=[\'cls\', \'sparse_tensor_value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_shape" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-summary.-audio.pbtxt b/tensorflow/tools/api/golden/tensorflow.-summary.-audio.pbtxt new file mode 100644 index 00000000000..781010d75e2 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-summary.-audio.pbtxt @@ -0,0 +1,96 @@ +path: "tensorflow.Summary.Audio" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "CONTENT_TYPE_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "ENCODED_AUDIO_STRING_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "LENGTH_FRAMES_FIELD_NUMBER" + mtype: "" + } + member { + name: "NUM_CHANNELS_FIELD_NUMBER" + mtype: "" + } + member { + name: "SAMPLE_RATE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + 
member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-summary.-image.pbtxt b/tensorflow/tools/api/golden/tensorflow.-summary.-image.pbtxt new file mode 100644 index 00000000000..feb9c7ee927 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-summary.-image.pbtxt @@ -0,0 +1,92 @@ +path: "tensorflow.Summary.Image" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "COLORSPACE_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "ENCODED_IMAGE_STRING_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "HEIGHT_FIELD_NUMBER" + mtype: "" + } + member { + name: "WIDTH_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git 
a/tensorflow/tools/api/golden/tensorflow.-summary.-value.pbtxt b/tensorflow/tools/api/golden/tensorflow.-summary.-value.pbtxt new file mode 100644 index 00000000000..d02fb9ecd48 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-summary.-value.pbtxt @@ -0,0 +1,108 @@ +path: "tensorflow.Summary.Value" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "AUDIO_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "HISTO_FIELD_NUMBER" + mtype: "" + } + member { + name: "IMAGE_FIELD_NUMBER" + mtype: "" + } + member { + name: "NODE_NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "OBSOLETE_OLD_STYLE_HISTOGRAM_FIELD_NUMBER" + mtype: "" + } + member { + name: "SIMPLE_VALUE_FIELD_NUMBER" + mtype: "" + } + member { + name: "TAG_FIELD_NUMBER" + mtype: "" + } + member { + name: "TENSOR_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-summary.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.-summary.pbtxt new file mode 100644 index 00000000000..38de17fa9e5 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-summary.pbtxt @@ -0,0 +1,92 @@ +path: "tensorflow.Summary" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "Audio" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "Image" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member { + name: "Value" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-t-f-record-reader.pbtxt b/tensorflow/tools/api/golden/tensorflow.-t-f-record-reader.pbtxt new file mode 100644 index 00000000000..cdf79373919 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-t-f-record-reader.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.TFRecordReader" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "reader_ref" + mtype: "" + } + member { + name: 
"supports_serialize" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'name\', \'options\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "num_records_produced" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "num_work_units_completed" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read" + argspec: "args=[\'self\', \'queue\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read_up_to" + argspec: "args=[\'self\', \'queue\', \'num_records\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reset" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "restore_state" + argspec: "args=[\'self\', \'state\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "serialize_state" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-tensor-array.pbtxt b/tensorflow/tools/api/golden/tensorflow.-tensor-array.pbtxt new file mode 100644 index 00000000000..a0fad4df524 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-tensor-array.pbtxt @@ -0,0 +1,69 @@ +path: "tensorflow.TensorArray" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "dtype" + mtype: "" + } + member { + name: "flow" + mtype: "" + } + member { + name: "handle" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'dtype\', \'size\', \'dynamic_size\', \'clear_after_read\', \'tensor_array_name\', \'handle\', \'flow\', \'infer_shape\', \'element_shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', 
\'None\', \'None\', \'True\', \'None\', \'None\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "concat" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "gather" + argspec: "args=[\'self\', \'indices\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "grad" + argspec: "args=[\'self\', \'source\', \'flow\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "identity" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "read" + argspec: "args=[\'self\', \'index\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "scatter" + argspec: "args=[\'self\', \'indices\', \'value\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "size" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "split" + argspec: "args=[\'self\', \'value\', \'lengths\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "stack" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "unstack" + argspec: "args=[\'self\', \'value\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "write" + argspec: "args=[\'self\', \'index\', \'value\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-tensor-info.pbtxt b/tensorflow/tools/api/golden/tensorflow.-tensor-info.pbtxt new file mode 100644 index 00000000000..87632fb7b9e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-tensor-info.pbtxt @@ 
-0,0 +1,88 @@ +path: "tensorflow.TensorInfo" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "DTYPE_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "TENSOR_SHAPE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-tensor-shape.pbtxt b/tensorflow/tools/api/golden/tensorflow.-tensor-shape.pbtxt new file mode 100644 index 00000000000..d5b9cb8f5ed --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-tensor-shape.pbtxt @@ -0,0 +1,73 @@ +path: "tensorflow.TensorShape" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "dims" + mtype: "" + } + member { + name: "ndims" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'dims\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_list" + argspec: "args=[\'self\'], varargs=None, keywords=None, 
defaults=None" + } + member_method { + name: "as_proto" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "assert_has_rank" + argspec: "args=[\'self\', \'rank\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "assert_is_compatible_with" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "assert_is_fully_defined" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "assert_same_rank" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "concatenate" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_compatible_with" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_fully_defined" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "merge_with" + argspec: "args=[\'self\', \'other\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "num_elements" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "with_rank" + argspec: "args=[\'self\', \'rank\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "with_rank_at_least" + argspec: "args=[\'self\', \'rank\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "with_rank_at_most" + argspec: "args=[\'self\', \'rank\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-tensor.pbtxt b/tensorflow/tools/api/golden/tensorflow.-tensor.pbtxt new file mode 100644 index 00000000000..38d19bb5374 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-tensor.pbtxt @@ -0,0 +1,58 @@ +path: "tensorflow.Tensor" +tf_class { + 
is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "OVERLOADABLE_OPERATORS" + mtype: "" + } + member { + name: "device" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "graph" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member { + name: "shape" + mtype: "" + } + member { + name: "value_index" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'op\', \'value_index\', \'dtype\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "consumers" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "eval" + argspec: "args=[\'self\', \'feed_dict\', \'session\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "get_shape" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "set_shape" + argspec: "args=[\'self\', \'shape\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-text-line-reader.pbtxt b/tensorflow/tools/api/golden/tensorflow.-text-line-reader.pbtxt new file mode 100644 index 00000000000..e9779f07620 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-text-line-reader.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.TextLineReader" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "reader_ref" + mtype: "" + } + member { + name: "supports_serialize" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'skip_header_lines\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "num_records_produced" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "num_work_units_completed" + argspec: "args=[\'self\', \'name\'], varargs=None, 
keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read" + argspec: "args=[\'self\', \'queue\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read_up_to" + argspec: "args=[\'self\', \'queue\', \'num_records\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reset" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "restore_state" + argspec: "args=[\'self\', \'state\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "serialize_state" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-var-len-feature.pbtxt b/tensorflow/tools/api/golden/tensorflow.-var-len-feature.pbtxt new file mode 100644 index 00000000000..54b66f43f8e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-var-len-feature.pbtxt @@ -0,0 +1,19 @@ +path: "tensorflow.VarLenFeature" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "dtype" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-variable-scope.pbtxt b/tensorflow/tools/api/golden/tensorflow.-variable-scope.pbtxt new file mode 100644 index 00000000000..c9b2dfd6772 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-variable-scope.pbtxt @@ -0,0 +1,97 @@ +path: "tensorflow.VariableScope" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "caching_device" + mtype: "" + } + member { + name: "custom_getter" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "initializer" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "original_name_scope" + mtype: "" + } + member { + name: 
"partitioner" + mtype: "" + } + member { + name: "regularizer" + mtype: "" + } + member { + name: "reuse" + mtype: "" + } + member { + name: "use_resource" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'reuse\', \'name\', \'initializer\', \'regularizer\', \'caching_device\', \'partitioner\', \'custom_getter\', \'name_scope\', \'dtype\', \'use_resource\'], varargs=None, keywords=None, defaults=[\'\', \'None\', \'None\', \'None\', \'None\', \'None\', \'\', \"\", \'None\'], " + } + member_method { + name: "get_collection" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_variable" + argspec: "args=[\'self\', \'var_store\', \'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'reuse\', \'trainable\', \'collections\', \'caching_device\', \'partitioner\', \'validate_shape\', \'use_resource\', \'custom_getter\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\'], " + } + member_method { + name: "global_variables" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "reuse_variables" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "set_caching_device" + argspec: "args=[\'self\', \'caching_device\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "set_custom_getter" + argspec: "args=[\'self\', \'custom_getter\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "set_dtype" + argspec: "args=[\'self\', \'dtype\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "set_initializer" + argspec: "args=[\'self\', \'initializer\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "set_partitioner" + argspec: "args=[\'self\', \'partitioner\'], varargs=None, 
keywords=None, defaults=None" + } + member_method { + name: "set_regularizer" + argspec: "args=[\'self\', \'regularizer\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "set_use_resource" + argspec: "args=[\'self\', \'use_resource\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "trainable_variables" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-variable.-save-slice-info.pbtxt b/tensorflow/tools/api/golden/tensorflow.-variable.-save-slice-info.pbtxt new file mode 100644 index 00000000000..ac3ccd468b2 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-variable.-save-slice-info.pbtxt @@ -0,0 +1,17 @@ +path: "tensorflow.Variable.SaveSliceInfo" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "spec" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'full_name\', \'full_shape\', \'var_offset\', \'var_shape\', \'save_slice_info_def\', \'import_scope\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "to_proto" + argspec: "args=[\'self\', \'export_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-variable.pbtxt b/tensorflow/tools/api/golden/tensorflow.-variable.pbtxt new file mode 100644 index 00000000000..d67a2713f7a --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-variable.pbtxt @@ -0,0 +1,101 @@ +path: "tensorflow.Variable" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "SaveSliceInfo" + mtype: "" + } + member { + name: "device" + mtype: "" + } + member { + name: "dtype" + mtype: "" + } + member { + name: "graph" + mtype: "" + } + member { + name: "initial_value" + mtype: "" + } + member { + name: "initializer" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + 
name: "op" + mtype: "" + } + member { + name: "shape" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'initial_value\', \'trainable\', \'collections\', \'validate_shape\', \'caching_device\', \'name\', \'variable_def\', \'dtype\', \'expected_shape\', \'import_scope\'], varargs=None, keywords=None, defaults=[\'None\', \'True\', \'None\', \'True\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assign" + argspec: "args=[\'self\', \'value\', \'use_locking\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "assign_add" + argspec: "args=[\'self\', \'delta\', \'use_locking\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "assign_sub" + argspec: "args=[\'self\', \'delta\', \'use_locking\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "count_up_to" + argspec: "args=[\'self\', \'limit\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "eval" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "from_proto" + argspec: "args=[\'variable_def\', \'import_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_shape" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "initialized_value" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "load" + argspec: "args=[\'self\', \'value\', \'session\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read_value" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "scatter_sub" + argspec: "args=[\'self\', \'sparse_delta\', \'use_locking\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: 
"set_shape" + argspec: "args=[\'self\', \'shape\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "to_proto" + argspec: "args=[\'self\', \'export_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "value" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.-whole-file-reader.pbtxt b/tensorflow/tools/api/golden/tensorflow.-whole-file-reader.pbtxt new file mode 100644 index 00000000000..4ac759891c6 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.-whole-file-reader.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.WholeFileReader" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "reader_ref" + mtype: "" + } + member { + name: "supports_serialize" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "num_records_produced" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "num_work_units_completed" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read" + argspec: "args=[\'self\', \'queue\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read_up_to" + argspec: "args=[\'self\', \'queue\', \'num_records\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reset" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "restore_state" + argspec: "args=[\'self\', \'state\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "serialize_state" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} 
diff --git a/tensorflow/tools/api/golden/tensorflow.app.pbtxt b/tensorflow/tools/api/golden/tensorflow.app.pbtxt new file mode 100644 index 00000000000..85044a89879 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.app.pbtxt @@ -0,0 +1,11 @@ +path: "tensorflow.app" +tf_module { + member { + name: "flags" + mtype: "" + } + member_method { + name: "run" + argspec: "args=[\'main\', \'argv\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.compat.pbtxt b/tensorflow/tools/api/golden/tensorflow.compat.pbtxt new file mode 100644 index 00000000000..ccc60314001 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.compat.pbtxt @@ -0,0 +1,35 @@ +path: "tensorflow.compat" +tf_module { + member { + name: "bytes_or_text_types" + mtype: "" + } + member { + name: "complex_types" + mtype: "" + } + member { + name: "integral_types" + mtype: "" + } + member { + name: "real_types" + mtype: "" + } + member_method { + name: "as_bytes" + argspec: "args=[\'bytes_or_text\', \'encoding\'], varargs=None, keywords=None, defaults=[\'utf-8\'], " + } + member_method { + name: "as_str" + argspec: "args=[\'bytes_or_text\', \'encoding\'], varargs=None, keywords=None, defaults=[\'utf-8\'], " + } + member_method { + name: "as_str_any" + argspec: "args=[\'value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_text" + argspec: "args=[\'bytes_or_text\', \'encoding\'], varargs=None, keywords=None, defaults=[\'utf-8\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.constant_initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.constant_initializer.pbtxt new file mode 100644 index 00000000000..d34bfe51479 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.constant_initializer.pbtxt @@ -0,0 +1,10 @@ +path: "tensorflow.constant_initializer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: 
"args=[\'self\', \'value\', \'dtype\', \'verify_shape\'], varargs=None, keywords=None, defaults=[\'0\', \"\", \'False\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-aborted-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-aborted-error.pbtxt new file mode 100644 index 00000000000..ea9186b0b9d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-aborted-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.AbortedError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-already-exists-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-already-exists-error.pbtxt new file mode 100644 index 00000000000..4e155081dd2 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-already-exists-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.AlreadyExistsError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-cancelled-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-cancelled-error.pbtxt new file mode 100644 index 00000000000..b02a0e023aa --- /dev/null +++ 
b/tensorflow/tools/api/golden/tensorflow.errors.-cancelled-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.CancelledError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-data-loss-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-data-loss-error.pbtxt new file mode 100644 index 00000000000..c1fa66342a7 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-data-loss-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.DataLossError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-deadline-exceeded-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-deadline-exceeded-error.pbtxt new file mode 100644 index 00000000000..8e037936191 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-deadline-exceeded-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.DeadlineExceededError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member 
{ + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-failed-precondition-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-failed-precondition-error.pbtxt new file mode 100644 index 00000000000..384d4b534c6 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-failed-precondition-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.FailedPreconditionError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-internal-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-internal-error.pbtxt new file mode 100644 index 00000000000..ac5c4d7879b --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-internal-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.InternalError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-invalid-argument-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-invalid-argument-error.pbtxt new file mode 100644 index 
00000000000..161edd4a7c5 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-invalid-argument-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.InvalidArgumentError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-not-found-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-not-found-error.pbtxt new file mode 100644 index 00000000000..1e64730ac6d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-not-found-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.NotFoundError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-op-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-op-error.pbtxt new file mode 100644 index 00000000000..b1f14c0457d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-op-error.pbtxt @@ -0,0 +1,29 @@ +path: "tensorflow.errors.OpError" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + 
mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\', \'error_code\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-out-of-range-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-out-of-range-error.pbtxt new file mode 100644 index 00000000000..6365e472868 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-out-of-range-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.OutOfRangeError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-permission-denied-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-permission-denied-error.pbtxt new file mode 100644 index 00000000000..dc8a66f9ead --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-permission-denied-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.PermissionDeniedError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-resource-exhausted-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-resource-exhausted-error.pbtxt new file mode 100644 
index 00000000000..85bb384b469 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-resource-exhausted-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.ResourceExhaustedError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-unauthenticated-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-unauthenticated-error.pbtxt new file mode 100644 index 00000000000..d57d7ac2f20 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-unauthenticated-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.UnauthenticatedError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-unavailable-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-unavailable-error.pbtxt new file mode 100644 index 00000000000..cc33e6ed8d1 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-unavailable-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.UnavailableError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + 
mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-unimplemented-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-unimplemented-error.pbtxt new file mode 100644 index 00000000000..b8c2e22dbd7 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-unimplemented-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.UnimplementedError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.-unknown-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.-unknown-error.pbtxt new file mode 100644 index 00000000000..8ffcfae95b8 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.-unknown-error.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.errors.UnknownError" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "error_code" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member { + name: "node_def" + mtype: "" + } + member { + name: "op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'node_def\', \'op\', \'message\', \'error_code\'], varargs=None, keywords=None, defaults=[\'2\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.errors.pbtxt b/tensorflow/tools/api/golden/tensorflow.errors.pbtxt new file mode 
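The golden files above record a uniform constructor shape for the `tf.errors` exception classes: every concrete subclass takes `(node_def, op, message)`, while `UnknownError` alone keeps an explicit `error_code` parameter defaulting to `'2'`. A minimal pure-Python sketch of that shape — an illustration of what the argspecs record, not TensorFlow's implementation (the `12` for UNIMPLEMENTED is the canonical gRPC code, an assumption not stated in this patch):

```python
class OpError(Exception):
    """Base class; carries node_def, op, message, and a numeric error_code."""
    def __init__(self, node_def, op, message, error_code):
        super(OpError, self).__init__(message)
        self.node_def = node_def
        self.op = op
        self.message = message
        self.error_code = error_code

class UnimplementedError(OpError):
    # Concrete subclasses take (node_def, op, message) and pin their code.
    def __init__(self, node_def, op, message):
        super(UnimplementedError, self).__init__(node_def, op, message, 12)

class UnknownError(OpError):
    # Per the golden file, UnknownError alone exposes error_code,
    # defaulting to 2 (consistent with defaults=['2'] above).
    def __init__(self, node_def, op, message, error_code=2):
        super(UnknownError, self).__init__(node_def, op, message, error_code)
```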
100644 index 00000000000..0ad1c19603b --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.errors.pbtxt @@ -0,0 +1,151 @@ +path: "tensorflow.errors" +tf_module { + member { + name: "ABORTED" + mtype: "" + } + member { + name: "ALREADY_EXISTS" + mtype: "" + } + member { + name: "AbortedError" + mtype: "" + } + member { + name: "AlreadyExistsError" + mtype: "" + } + member { + name: "CANCELLED" + mtype: "" + } + member { + name: "CancelledError" + mtype: "" + } + member { + name: "DATA_LOSS" + mtype: "" + } + member { + name: "DEADLINE_EXCEEDED" + mtype: "" + } + member { + name: "DataLossError" + mtype: "" + } + member { + name: "DeadlineExceededError" + mtype: "" + } + member { + name: "FAILED_PRECONDITION" + mtype: "" + } + member { + name: "FailedPreconditionError" + mtype: "" + } + member { + name: "INTERNAL" + mtype: "" + } + member { + name: "INVALID_ARGUMENT" + mtype: "" + } + member { + name: "InternalError" + mtype: "" + } + member { + name: "InvalidArgumentError" + mtype: "" + } + member { + name: "NOT_FOUND" + mtype: "" + } + member { + name: "NotFoundError" + mtype: "" + } + member { + name: "OK" + mtype: "" + } + member { + name: "OUT_OF_RANGE" + mtype: "" + } + member { + name: "OpError" + mtype: "" + } + member { + name: "OutOfRangeError" + mtype: "" + } + member { + name: "PERMISSION_DENIED" + mtype: "" + } + member { + name: "PermissionDeniedError" + mtype: "" + } + member { + name: "RESOURCE_EXHAUSTED" + mtype: "" + } + member { + name: "ResourceExhaustedError" + mtype: "" + } + member { + name: "UNAUTHENTICATED" + mtype: "" + } + member { + name: "UNAVAILABLE" + mtype: "" + } + member { + name: "UNIMPLEMENTED" + mtype: "" + } + member { + name: "UNKNOWN" + mtype: "" + } + member { + name: "UnauthenticatedError" + mtype: "" + } + member { + name: "UnavailableError" + mtype: "" + } + member { + name: "UnimplementedError" + mtype: "" + } + member { + name: "UnknownError" + mtype: "" + } + member_method { + name: "error_code_from_exception_type" 
+ argspec: "args=[\'cls\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "exception_type_from_error_code" + argspec: "args=[\'error_code\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "raise_exception_on_not_ok_status" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.-estimator-spec.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.-estimator-spec.pbtxt new file mode 100644 index 00000000000..5dbfe217264 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.-estimator-spec.pbtxt @@ -0,0 +1,47 @@ +path: "tensorflow.estimator.EstimatorSpec" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "eval_metric_ops" + mtype: "" + } + member { + name: "export_outputs" + mtype: "" + } + member { + name: "loss" + mtype: "" + } + member { + name: "predictions" + mtype: "" + } + member { + name: "scaffold" + mtype: "" + } + member { + name: "train_op" + mtype: "" + } + member { + name: "training_chief_hooks" + mtype: "" + } + member { + name: "training_hooks" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.-estimator.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.-estimator.pbtxt new file mode 100644 index 00000000000..7a769fd546c --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.-estimator.pbtxt @@ -0,0 +1,37 @@ +path: "tensorflow.estimator.Estimator" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "config" + mtype: "" + } + member { + name: "model_dir" + mtype: "" + } + member { + name: "params" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'model_fn\', \'model_dir\', \'config\', \'params\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', 
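The `tensorflow.errors` module listing pairs each SCREAMING_CASE status constant with a CamelCase exception class, plus two lookup helpers (`error_code_from_exception_type`, `exception_type_from_error_code`). A hedged sketch of that pairing: the numeric values below are the canonical gRPC status codes that `tf.errors` mirrors — an assumption, since this patch records only the names (though `UNKNOWN = 2` matches the `UnknownError` default above):

```python
# Canonical status codes assumed to back the tf.errors constants.
CANONICAL_CODES = {
    'OK': 0, 'CANCELLED': 1, 'UNKNOWN': 2, 'INVALID_ARGUMENT': 3,
    'DEADLINE_EXCEEDED': 4, 'NOT_FOUND': 5, 'ALREADY_EXISTS': 6,
    'PERMISSION_DENIED': 7, 'RESOURCE_EXHAUSTED': 8,
    'FAILED_PRECONDITION': 9, 'ABORTED': 10, 'OUT_OF_RANGE': 11,
    'UNIMPLEMENTED': 12, 'INTERNAL': 13, 'UNAVAILABLE': 14,
    'DATA_LOSS': 15, 'UNAUTHENTICATED': 16,
}

def exception_name_from_error_code(error_code):
    """Map a numeric code to its exception-style name, e.g. 5 -> 'NotFoundError'.

    Note OK (0) has no corresponding exception class in the listing above;
    this sketch does not special-case it.
    """
    by_code = {v: k for k, v in CANONICAL_CODES.items()}
    name = by_code[error_code]
    return ''.join(part.capitalize() for part in name.split('_')) + 'Error'
```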
\'None\'], " + } + member_method { + name: "evaluate" + argspec: "args=[\'self\', \'input_fn\', \'steps\', \'hooks\', \'checkpoint_path\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "export_savedmodel" + argspec: "args=[\'self\', \'export_dir_base\', \'serving_input_receiver_fn\', \'assets_extra\', \'as_text\', \'checkpoint_path\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\'], " + } + member_method { + name: "predict" + argspec: "args=[\'self\', \'input_fn\', \'predict_keys\', \'hooks\', \'checkpoint_path\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "train" + argspec: "args=[\'self\', \'input_fn\', \'hooks\', \'steps\', \'max_steps\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.-mode-keys.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.-mode-keys.pbtxt new file mode 100644 index 00000000000..6a1c24fa63f --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.-mode-keys.pbtxt @@ -0,0 +1,20 @@ +path: "tensorflow.estimator.ModeKeys" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "EVAL" + mtype: "" + } + member { + name: "PREDICT" + mtype: "" + } + member { + name: "TRAIN" + mtype: "" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.-run-config.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.-run-config.pbtxt new file mode 100644 index 00000000000..8fd991a317b --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.-run-config.pbtxt @@ -0,0 +1,68 @@ +path: "tensorflow.estimator.RunConfig" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "cluster_spec" + mtype: "" + } + member { + name: "evaluation_master" + mtype: "" + } + member { + name: "is_chief" + mtype: "" + } + 
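Each `member_method` above is recorded as an `argspec` string in which `defaults` right-aligns against `args` — so for `Estimator.evaluate`, four `'None'` defaults against six args mean only `input_fn` is required. A hedged helper for splitting such a string into required args and `(arg, default)` pairs, purely illustrative and not part of TensorFlow's API-compatibility checker:

```python
import re

def parse_argspec(argspec):
    """Split a golden-file argspec into (required_args, [(arg, default)])."""
    args_str = re.search(r"args=\[(.*?)\]", argspec).group(1)
    args = [a.strip(" '\\") for a in args_str.split(',')] if args_str else []
    m = re.search(r"defaults=\[(.*?)\]", argspec)
    defaults = [d.strip(" '\\") for d in m.group(1).split(',')] if m else []
    n_required = len(args) - len(defaults)
    required = [a for a in args[:n_required] if a != 'self']
    return required, list(zip(args[n_required:], defaults))
```

Applied to the `evaluate` entry, this yields `input_fn` as the only required argument, with `steps`, `hooks`, `checkpoint_path`, and `name` all defaulting to `None`.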
member { + name: "keep_checkpoint_every_n_hours" + mtype: "" + } + member { + name: "keep_checkpoint_max" + mtype: "" + } + member { + name: "master" + mtype: "" + } + member { + name: "num_ps_replicas" + mtype: "" + } + member { + name: "num_worker_replicas" + mtype: "" + } + member { + name: "save_checkpoints_secs" + mtype: "" + } + member { + name: "save_checkpoints_steps" + mtype: "" + } + member { + name: "save_summary_steps" + mtype: "" + } + member { + name: "session_config" + mtype: "" + } + member { + name: "task_id" + mtype: "" + } + member { + name: "task_type" + mtype: "" + } + member { + name: "tf_random_seed" + mtype: "" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-classification-output.__metaclass__.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-classification-output.__metaclass__.pbtxt new file mode 100644 index 00000000000..3cf7af8da95 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-classification-output.__metaclass__.pbtxt @@ -0,0 +1,14 @@ +path: "tensorflow.estimator.export.ClassificationOutput.__metaclass__" +tf_class { + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "mro" + } + member_method { + name: "register" + argspec: "args=[\'cls\', \'subclass\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-classification-output.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-classification-output.pbtxt new file mode 100644 index 00000000000..2df1840c4a4 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-classification-output.pbtxt @@ -0,0 +1,22 @@ +path: "tensorflow.estimator.export.ClassificationOutput" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "classes" + mtype: "" + } + member { + name: "scores" + mtype: "" + } + member_method { + name: 
"__init__" + argspec: "args=[\'self\', \'scores\', \'classes\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "as_signature_def" + argspec: "args=[\'self\', \'receiver_tensors\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-export-output.__metaclass__.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-export-output.__metaclass__.pbtxt new file mode 100644 index 00000000000..5d165ccbf91 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-export-output.__metaclass__.pbtxt @@ -0,0 +1,14 @@ +path: "tensorflow.estimator.export.ExportOutput.__metaclass__" +tf_class { + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "mro" + } + member_method { + name: "register" + argspec: "args=[\'cls\', \'subclass\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-export-output.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-export-output.pbtxt new file mode 100644 index 00000000000..fa62e8ced80 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-export-output.pbtxt @@ -0,0 +1,12 @@ +path: "tensorflow.estimator.export.ExportOutput" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "as_signature_def" + argspec: "args=[\'self\', \'receiver_tensors\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-predict-output.__metaclass__.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-predict-output.__metaclass__.pbtxt new file mode 100644 index 00000000000..743495ba98c --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-predict-output.__metaclass__.pbtxt @@ -0,0 +1,14 @@ +path: 
"tensorflow.estimator.export.PredictOutput.__metaclass__" +tf_class { + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "mro" + } + member_method { + name: "register" + argspec: "args=[\'cls\', \'subclass\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-predict-output.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-predict-output.pbtxt new file mode 100644 index 00000000000..e0160b10ce1 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-predict-output.pbtxt @@ -0,0 +1,18 @@ +path: "tensorflow.estimator.export.PredictOutput" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "outputs" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'outputs\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_signature_def" + argspec: "args=[\'self\', \'receiver_tensors\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-regression-output.__metaclass__.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-regression-output.__metaclass__.pbtxt new file mode 100644 index 00000000000..dbf4e3dec85 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-regression-output.__metaclass__.pbtxt @@ -0,0 +1,14 @@ +path: "tensorflow.estimator.export.RegressionOutput.__metaclass__" +tf_class { + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "mro" + } + member_method { + name: "register" + argspec: "args=[\'cls\', \'subclass\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-regression-output.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-regression-output.pbtxt new file mode 100644 index 00000000000..905f0e05535 --- 
/dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-regression-output.pbtxt @@ -0,0 +1,18 @@ +path: "tensorflow.estimator.export.RegressionOutput" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "value" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_signature_def" + argspec: "args=[\'self\', \'receiver_tensors\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.-serving-input-receiver.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.-serving-input-receiver.pbtxt new file mode 100644 index 00000000000..0d9e0443088 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.-serving-input-receiver.pbtxt @@ -0,0 +1,23 @@ +path: "tensorflow.estimator.export.ServingInputReceiver" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "features" + mtype: "" + } + member { + name: "receiver_tensors" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.export.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.export.pbtxt new file mode 100644 index 00000000000..4d0dddb3bc0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.export.pbtxt @@ -0,0 +1,31 @@ +path: "tensorflow.estimator.export" +tf_module { + member { + name: "ClassificationOutput" + mtype: "" + } + member { + name: "ExportOutput" + mtype: "" + } + member { + name: "PredictOutput" + mtype: "" + } + member { + name: "RegressionOutput" + mtype: "" + } + member { + name: "ServingInputReceiver" + mtype: "" + } + member_method { + name: "build_parsing_serving_input_receiver_fn" + argspec: "args=[\'feature_spec\', \'default_batch_size\'], 
varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "build_raw_serving_input_receiver_fn" + argspec: "args=[\'features\', \'default_batch_size\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.inputs.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.inputs.pbtxt new file mode 100644 index 00000000000..b318fea1f82 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.inputs.pbtxt @@ -0,0 +1,11 @@ +path: "tensorflow.estimator.inputs" +tf_module { + member_method { + name: "numpy_input_fn" + argspec: "args=[\'x\', \'y\', \'batch_size\', \'num_epochs\', \'shuffle\', \'queue_capacity\', \'num_threads\'], varargs=None, keywords=None, defaults=[\'None\', \'128\', \'1\', \'None\', \'1000\', \'1\'], " + } + member_method { + name: "pandas_input_fn" + argspec: "args=[\'x\', \'y\', \'batch_size\', \'num_epochs\', \'shuffle\', \'queue_capacity\', \'num_threads\', \'target_column\'], varargs=None, keywords=None, defaults=[\'None\', \'128\', \'1\', \'None\', \'1000\', \'1\', \'target\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.estimator.pbtxt b/tensorflow/tools/api/golden/tensorflow.estimator.pbtxt new file mode 100644 index 00000000000..0d5dc73271d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.estimator.pbtxt @@ -0,0 +1,27 @@ +path: "tensorflow.estimator" +tf_module { + member { + name: "Estimator" + mtype: "" + } + member { + name: "EstimatorSpec" + mtype: "" + } + member { + name: "ModeKeys" + mtype: "" + } + member { + name: "RunConfig" + mtype: "" + } + member { + name: "export" + mtype: "" + } + member { + name: "inputs" + mtype: "" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.gfile.-fast-g-file.pbtxt b/tensorflow/tools/api/golden/tensorflow.gfile.-fast-g-file.pbtxt new file mode 100644 index 00000000000..41497dc8699 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.gfile.-fast-g-file.pbtxt 
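The `numpy_input_fn` entry above illustrates the right-alignment convention concretely: seven args against six defaults means only `x` is required. A small sketch that reconstructs the implied signature from the recorded lists (illustrative only):

```python
def reconstruct_signature(args, defaults):
    """Pair each trailing arg with its default; leading args are required."""
    n_required = len(args) - len(defaults)
    parts = list(args[:n_required])
    parts += ['%s=%s' % (a, d) for a, d in zip(args[n_required:], defaults)]
    return ', '.join(parts)

# The args/defaults recorded for numpy_input_fn in the golden file above.
args = ['x', 'y', 'batch_size', 'num_epochs', 'shuffle',
        'queue_capacity', 'num_threads']
defaults = ['None', '128', '1', 'None', '1000', '1']
print(reconstruct_signature(args, defaults))
# x, y=None, batch_size=128, num_epochs=1, shuffle=None, queue_capacity=1000, num_threads=1
```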
@@ -0,0 +1,58 @@ +path: "tensorflow.gfile.FastGFile" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "mode" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'name\', \'mode\'], varargs=None, keywords=None, defaults=[\'r\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "flush" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "next" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "read" + argspec: "args=[\'self\', \'n\'], varargs=None, keywords=None, defaults=[\'-1\'], " + } + member_method { + name: "readline" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "readlines" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "seek" + argspec: "args=[\'self\', \'position\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "size" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "tell" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "write" + argspec: "args=[\'self\', \'file_content\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.gfile.-g-file.pbtxt b/tensorflow/tools/api/golden/tensorflow.gfile.-g-file.pbtxt new file mode 100644 index 00000000000..bab0f279b24 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.gfile.-g-file.pbtxt @@ -0,0 +1,58 @@ +path: "tensorflow.gfile.GFile" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "mode" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member_method { + name: 
"__init__" + argspec: "args=[\'self\', \'name\', \'mode\'], varargs=None, keywords=None, defaults=[\'r\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "flush" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "next" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "read" + argspec: "args=[\'self\', \'n\'], varargs=None, keywords=None, defaults=[\'-1\'], " + } + member_method { + name: "readline" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "readlines" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "seek" + argspec: "args=[\'self\', \'position\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "size" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "tell" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "write" + argspec: "args=[\'self\', \'file_content\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.gfile.-open.pbtxt b/tensorflow/tools/api/golden/tensorflow.gfile.-open.pbtxt new file mode 100644 index 00000000000..86e577c19a8 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.gfile.-open.pbtxt @@ -0,0 +1,58 @@ +path: "tensorflow.gfile.Open" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "mode" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'name\', \'mode\'], varargs=None, keywords=None, defaults=[\'r\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + 
name: "flush" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "next" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "read" + argspec: "args=[\'self\', \'n\'], varargs=None, keywords=None, defaults=[\'-1\'], " + } + member_method { + name: "readline" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "readlines" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "seek" + argspec: "args=[\'self\', \'position\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "size" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "tell" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "write" + argspec: "args=[\'self\', \'file_content\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.gfile.pbtxt b/tensorflow/tools/api/golden/tensorflow.gfile.pbtxt new file mode 100644 index 00000000000..65b55a8b7c4 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.gfile.pbtxt @@ -0,0 +1,63 @@ +path: "tensorflow.gfile" +tf_module { + member { + name: "FastGFile" + mtype: "" + } + member { + name: "GFile" + mtype: "" + } + member { + name: "Open" + mtype: "" + } + member_method { + name: "Copy" + argspec: "args=[\'oldpath\', \'newpath\', \'overwrite\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "DeleteRecursively" + argspec: "args=[\'dirname\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "Exists" + argspec: "args=[\'filename\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "Glob" + argspec: "args=[\'filename\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: 
"IsDirectory" + argspec: "args=[\'dirname\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "ListDirectory" + argspec: "args=[\'dirname\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "MakeDirs" + argspec: "args=[\'dirname\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "MkDir" + argspec: "args=[\'dirname\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "Remove" + argspec: "args=[\'filename\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "Rename" + argspec: "args=[\'oldname\', \'newname\', \'overwrite\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "Stat" + argspec: "args=[\'filename\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "Walk" + argspec: "args=[\'top\', \'in_order\'], varargs=None, keywords=None, defaults=[\'True\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.graph_util.pbtxt b/tensorflow/tools/api/golden/tensorflow.graph_util.pbtxt new file mode 100644 index 00000000000..76a2df757e7 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.graph_util.pbtxt @@ -0,0 +1,23 @@ +path: "tensorflow.graph_util" +tf_module { + member_method { + name: "convert_variables_to_constants" + argspec: "args=[\'sess\', \'input_graph_def\', \'output_node_names\', \'variable_names_whitelist\', \'variable_names_blacklist\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "extract_sub_graph" + argspec: "args=[\'graph_def\', \'dest_nodes\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "must_run_on_cpu" + argspec: "args=[\'node\', \'pin_variables_on_cpu\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "remove_training_nodes" + argspec: "args=[\'input_graph\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: 
"tensor_shape_from_node_def_name" + argspec: "args=[\'graph\', \'input_name\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.image.-resize-method.pbtxt b/tensorflow/tools/api/golden/tensorflow.image.-resize-method.pbtxt new file mode 100644 index 00000000000..dbc360b13ee --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.image.-resize-method.pbtxt @@ -0,0 +1,24 @@ +path: "tensorflow.image.ResizeMethod" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "AREA" + mtype: "" + } + member { + name: "BICUBIC" + mtype: "" + } + member { + name: "BILINEAR" + mtype: "" + } + member { + name: "NEAREST_NEIGHBOR" + mtype: "" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.image.pbtxt b/tensorflow/tools/api/golden/tensorflow.image.pbtxt new file mode 100644 index 00000000000..6002f36bacb --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.image.pbtxt @@ -0,0 +1,175 @@ +path: "tensorflow.image" +tf_module { + member { + name: "ResizeMethod" + mtype: "" + } + member_method { + name: "adjust_brightness" + argspec: "args=[\'image\', \'delta\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "adjust_contrast" + argspec: "args=[\'images\', \'contrast_factor\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "adjust_gamma" + argspec: "args=[\'image\', \'gamma\', \'gain\'], varargs=None, keywords=None, defaults=[\'1\', \'1\'], " + } + member_method { + name: "adjust_hue" + argspec: "args=[\'image\', \'delta\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "adjust_saturation" + argspec: "args=[\'image\', \'saturation_factor\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "central_crop" + argspec: "args=[\'image\', \'central_fraction\'], varargs=None, keywords=None, defaults=None" + } + member_method { 
+ name: "convert_image_dtype" + argspec: "args=[\'image\', \'dtype\', \'saturate\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "crop_and_resize" + argspec: "args=[\'image\', \'boxes\', \'box_ind\', \'crop_size\', \'method\', \'extrapolation_value\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "crop_to_bounding_box" + argspec: "args=[\'image\', \'offset_height\', \'offset_width\', \'target_height\', \'target_width\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "decode_gif" + argspec: "args=[\'contents\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "decode_image" + argspec: "args=[\'contents\', \'channels\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "decode_jpeg" + argspec: "args=[\'contents\', \'channels\', \'ratio\', \'fancy_upscaling\', \'try_recover_truncated\', \'acceptable_fraction\', \'dct_method\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "decode_png" + argspec: "args=[\'contents\', \'channels\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "draw_bounding_boxes" + argspec: "args=[\'images\', \'boxes\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "encode_jpeg" + argspec: "args=[\'image\', \'format\', \'quality\', \'progressive\', \'optimize_size\', \'chroma_downsampling\', \'density_unit\', \'x_density\', \'y_density\', \'xmp_metadata\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "encode_png" + argspec: "args=[\'image\', 
\'compression\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "extract_glimpse" + argspec: "args=[\'input\', \'size\', \'offsets\', \'centered\', \'normalized\', \'uniform_noise\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "flip_left_right" + argspec: "args=[\'image\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "flip_up_down" + argspec: "args=[\'image\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "grayscale_to_rgb" + argspec: "args=[\'images\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "hsv_to_rgb" + argspec: "args=[\'images\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "non_max_suppression" + argspec: "args=[\'boxes\', \'scores\', \'max_output_size\', \'iou_threshold\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "pad_to_bounding_box" + argspec: "args=[\'image\', \'offset_height\', \'offset_width\', \'target_height\', \'target_width\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "per_image_standardization" + argspec: "args=[\'image\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "random_brightness" + argspec: "args=[\'image\', \'max_delta\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "random_contrast" + argspec: "args=[\'image\', \'lower\', \'upper\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "random_flip_left_right" + argspec: "args=[\'image\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "random_flip_up_down" + argspec: "args=[\'image\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\'], " 
+ } + member_method { + name: "random_hue" + argspec: "args=[\'image\', \'max_delta\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "random_saturation" + argspec: "args=[\'image\', \'lower\', \'upper\', \'seed\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "resize_area" + argspec: "args=[\'images\', \'size\', \'align_corners\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "resize_bicubic" + argspec: "args=[\'images\', \'size\', \'align_corners\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "resize_bilinear" + argspec: "args=[\'images\', \'size\', \'align_corners\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "resize_image_with_crop_or_pad" + argspec: "args=[\'image\', \'target_height\', \'target_width\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "resize_images" + argspec: "args=[\'images\', \'size\', \'method\', \'align_corners\'], varargs=None, keywords=None, defaults=[\'0\', \'False\'], " + } + member_method { + name: "resize_nearest_neighbor" + argspec: "args=[\'images\', \'size\', \'align_corners\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "rgb_to_grayscale" + argspec: "args=[\'images\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "rgb_to_hsv" + argspec: "args=[\'images\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "rot90" + argspec: "args=[\'image\', \'k\', \'name\'], varargs=None, keywords=None, defaults=[\'1\', \'None\'], " + } + member_method { + name: "sample_distorted_bounding_box" + argspec: "args=[\'image_size\', \'bounding_boxes\', \'seed\', \'seed2\', \'min_object_covered\', \'aspect_ratio_range\', 
\'area_range\', \'max_attempts\', \'use_image_if_no_bounding_boxes\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "total_variation" + argspec: "args=[\'images\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "transpose_image" + argspec: "args=[\'image\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.layers.pbtxt b/tensorflow/tools/api/golden/tensorflow.layers.pbtxt new file mode 100644 index 00000000000..6ca38e259bf --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.layers.pbtxt @@ -0,0 +1,59 @@ +path: "tensorflow.layers" +tf_module { + member_method { + name: "average_pooling1d" + argspec: "args=[\'inputs\', \'pool_size\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'valid\', \'channels_last\', \'None\'], " + } + member_method { + name: "average_pooling2d" + argspec: "args=[\'inputs\', \'pool_size\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'valid\', \'channels_last\', \'None\'], " + } + member_method { + name: "average_pooling3d" + argspec: "args=[\'inputs\', \'pool_size\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'valid\', \'channels_last\', \'None\'], " + } + member_method { + name: "batch_normalization" + argspec: "args=[\'inputs\', \'axis\', \'momentum\', \'epsilon\', \'center\', \'scale\', \'beta_initializer\', \'gamma_initializer\', \'moving_mean_initializer\', \'moving_variance_initializer\', \'beta_regularizer\', \'gamma_regularizer\', \'training\', \'trainable\', \'name\', \'reuse\', \'renorm\', \'renorm_clipping\', \'renorm_momentum\'], varargs=None, keywords=None, defaults=[\'-1\', \'0.99\', \'0.001\', \'True\', \'True\', \'\', \'\', \'\', \'\', \'None\', \'None\', 
\'False\', \'True\', \'None\', \'None\', \'False\', \'None\', \'0.99\'], " + } + member_method { + name: "conv1d" + argspec: "args=[\'inputs\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'trainable\', \'name\', \'reuse\'], varargs=None, keywords=None, defaults=[\'1\', \'valid\', \'channels_last\', \'1\', \'None\', \'True\', \'None\', \'\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\'], " + } + member_method { + name: "conv2d" + argspec: "args=[\'inputs\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'trainable\', \'name\', \'reuse\'], varargs=None, keywords=None, defaults=[\'(1, 1)\', \'valid\', \'channels_last\', \'(1, 1)\', \'None\', \'True\', \'None\', \'\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\'], " + } + member_method { + name: "conv2d_transpose" + argspec: "args=[\'inputs\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'trainable\', \'name\', \'reuse\'], varargs=None, keywords=None, defaults=[\'(1, 1)\', \'valid\', \'channels_last\', \'None\', \'True\', \'None\', \'\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\'], " + } + member_method { + name: "conv3d" + argspec: "args=[\'inputs\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'trainable\', \'name\', 
\'reuse\'], varargs=None, keywords=None, defaults=[\'(1, 1, 1)\', \'valid\', \'channels_last\', \'(1, 1, 1)\', \'None\', \'True\', \'None\', \'\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\'], " + } + member_method { + name: "dense" + argspec: "args=[\'inputs\', \'units\', \'activation\', \'use_bias\', \'kernel_initializer\', \'bias_initializer\', \'kernel_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'trainable\', \'name\', \'reuse\'], varargs=None, keywords=None, defaults=[\'None\', \'True\', \'None\', \'\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\'], " + } + member_method { + name: "dropout" + argspec: "args=[\'inputs\', \'rate\', \'noise_shape\', \'seed\', \'training\', \'name\'], varargs=None, keywords=None, defaults=[\'0.5\', \'None\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "max_pooling1d" + argspec: "args=[\'inputs\', \'pool_size\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'valid\', \'channels_last\', \'None\'], " + } + member_method { + name: "max_pooling2d" + argspec: "args=[\'inputs\', \'pool_size\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'valid\', \'channels_last\', \'None\'], " + } + member_method { + name: "max_pooling3d" + argspec: "args=[\'inputs\', \'pool_size\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'valid\', \'channels_last\', \'None\'], " + } + member_method { + name: "separable_conv2d" + argspec: "args=[\'inputs\', \'filters\', \'kernel_size\', \'strides\', \'padding\', \'data_format\', \'dilation_rate\', \'depth_multiplier\', \'activation\', \'use_bias\', \'depthwise_initializer\', \'pointwise_initializer\', \'bias_initializer\', \'depthwise_regularizer\', \'pointwise_regularizer\', \'bias_regularizer\', \'activity_regularizer\', \'trainable\', \'name\', \'reuse\'], varargs=None, keywords=None, defaults=[\'(1, 
1)\', \'valid\', \'channels_last\', \'(1, 1)\', \'1\', \'None\', \'True\', \'None\', \'None\', \'\', \'None\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.logging.pbtxt b/tensorflow/tools/api/golden/tensorflow.logging.pbtxt new file mode 100644 index 00000000000..85bb15455da --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.logging.pbtxt @@ -0,0 +1,83 @@ +path: "tensorflow.logging" +tf_module { + member { + name: "DEBUG" + mtype: "<type \'int\'>" + } + member { + name: "ERROR" + mtype: "<type \'int\'>" + } + member { + name: "FATAL" + mtype: "<type \'int\'>" + } + member { + name: "INFO" + mtype: "<type \'int\'>" + } + member { + name: "WARN" + mtype: "<type \'int\'>" + } + member_method { + name: "TaskLevelStatusMessage" + argspec: "args=[\'msg\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "debug" + argspec: "args=[\'msg\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "error" + argspec: "args=[\'msg\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "fatal" + argspec: "args=[\'msg\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "flush" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_verbosity" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "info" + argspec: "args=[\'msg\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "log" + argspec: "args=[\'level\', \'msg\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "log_every_n" + argspec: "args=[\'level\', \'msg\', \'n\'], varargs=args, keywords=None, defaults=None" + } + member_method { + name: "log_first_n" + argspec: "args=[\'level\', \'msg\', \'n\'], varargs=args, keywords=None, defaults=None" + } + member_method { + name: "log_if" + argspec: "args=[\'level\', \'msg\', \'condition\'], varargs=args, keywords=None, 
defaults=None" + } + member_method { + name: "set_verbosity" + argspec: "args=[\'v\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "vlog" + argspec: "args=[\'level\', \'msg\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "warn" + argspec: "args=[\'msg\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "warning" + argspec: "args=[\'msg\'], varargs=args, keywords=kwargs, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.losses.pbtxt b/tensorflow/tools/api/golden/tensorflow.losses.pbtxt new file mode 100644 index 00000000000..5477ac58174 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.losses.pbtxt @@ -0,0 +1,63 @@ +path: "tensorflow.losses" +tf_module { + member_method { + name: "absolute_difference" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'None\', \'losses\'], " + } + member_method { + name: "add_loss" + argspec: "args=[\'loss\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'losses\'], " + } + member_method { + name: "compute_weighted_loss" + argspec: "args=[\'losses\', \'weights\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'None\', \'losses\'], " + } + member_method { + name: "cosine_distance" + argspec: "args=[\'labels\', \'predictions\', \'dim\', \'weights\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'None\', \'1.0\', \'None\', \'losses\'], " + } + member_method { + name: "get_losses" + argspec: "args=[\'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'None\', \'losses\'], " + } + member_method { + name: "get_regularization_loss" + argspec: "args=[\'scope\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'total_regularization_loss\'], " + } + member_method { + name: "get_regularization_losses" + argspec: 
"args=[\'scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_total_loss" + argspec: "args=[\'add_regularization_losses\', \'name\'], varargs=None, keywords=None, defaults=[\'True\', \'total_loss\'], " + } + member_method { + name: "hinge_loss" + argspec: "args=[\'labels\', \'logits\', \'weights\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'None\', \'losses\'], " + } + member_method { + name: "log_loss" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'epsilon\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'1e-07\', \'None\', \'losses\'], " + } + member_method { + name: "mean_pairwise_squared_error" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'None\', \'losses\'], " + } + member_method { + name: "mean_squared_error" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'None\', \'losses\'], " + } + member_method { + name: "sigmoid_cross_entropy" + argspec: "args=[\'multi_class_labels\', \'logits\', \'weights\', \'label_smoothing\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'0\', \'None\', \'losses\'], " + } + member_method { + name: "softmax_cross_entropy" + argspec: "args=[\'onehot_labels\', \'logits\', \'weights\', \'label_smoothing\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'0\', \'None\', \'losses\'], " + } + member_method { + name: "sparse_softmax_cross_entropy" + argspec: "args=[\'labels\', \'logits\', \'weights\', \'scope\', \'loss_collection\'], varargs=None, keywords=None, defaults=[\'1.0\', \'None\', \'losses\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.metrics.pbtxt b/tensorflow/tools/api/golden/tensorflow.metrics.pbtxt new file mode 100644 
index 00000000000..262d11c38e1 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.metrics.pbtxt @@ -0,0 +1,99 @@ +path: "tensorflow.metrics" +tf_module { + member_method { + name: "accuracy" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "auc" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'num_thresholds\', \'metrics_collections\', \'updates_collections\', \'curve\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'200\', \'None\', \'None\', \'ROC\', \'None\'], " + } + member_method { + name: "false_negatives" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "false_positives" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "mean" + argspec: "args=[\'values\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "mean_absolute_error" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "mean_cosine_distance" + argspec: "args=[\'labels\', \'predictions\', \'dim\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "mean_iou" + argspec: 
"args=[\'labels\', \'predictions\', \'num_classes\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "mean_per_class_accuracy" + argspec: "args=[\'labels\', \'predictions\', \'num_classes\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "mean_relative_error" + argspec: "args=[\'labels\', \'predictions\', \'normalizer\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "mean_squared_error" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "mean_tensor" + argspec: "args=[\'values\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "percentage_below" + argspec: "args=[\'values\', \'threshold\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "precision" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "precision_at_thresholds" + argspec: "args=[\'labels\', \'predictions\', \'thresholds\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', 
\'None\'], " + } + member_method { + name: "recall" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "recall_at_k" + argspec: "args=[\'labels\', \'predictions\', \'k\', \'class_id\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "recall_at_thresholds" + argspec: "args=[\'labels\', \'predictions\', \'thresholds\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "root_mean_squared_error" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "sensitivity_at_specificity" + argspec: "args=[\'labels\', \'predictions\', \'specificity\', \'weights\', \'num_thresholds\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'200\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "sparse_average_precision_at_k" + argspec: "args=[\'labels\', \'predictions\', \'k\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "sparse_precision_at_k" + argspec: "args=[\'labels\', \'predictions\', \'k\', \'class_id\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "specificity_at_sensitivity" + 
argspec: "args=[\'labels\', \'predictions\', \'sensitivity\', \'weights\', \'num_thresholds\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'200\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "true_positives" + argspec: "args=[\'labels\', \'predictions\', \'weights\', \'metrics_collections\', \'updates_collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.nn.pbtxt b/tensorflow/tools/api/golden/tensorflow.nn.pbtxt new file mode 100644 index 00000000000..192ceac2ddf --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.nn.pbtxt @@ -0,0 +1,323 @@ +path: "tensorflow.nn" +tf_module { + member_method { + name: "all_candidate_sampler" + argspec: "args=[\'true_classes\', \'num_true\', \'num_sampled\', \'unique\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "atrous_conv2d" + argspec: "args=[\'value\', \'filters\', \'rate\', \'padding\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "atrous_conv2d_transpose" + argspec: "args=[\'value\', \'filters\', \'output_shape\', \'rate\', \'padding\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "avg_pool" + argspec: "args=[\'value\', \'ksize\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'NHWC\', \'None\'], " + } + member_method { + name: "avg_pool3d" + argspec: "args=[\'input\', \'ksize\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "batch_norm_with_global_normalization" + argspec: "args=[\'t\', \'m\', \'v\', \'beta\', \'gamma\', \'variance_epsilon\', \'scale_after_normalization\', \'name\'], varargs=None, keywords=None, 
defaults=[\'None\'], " + } + member_method { + name: "batch_normalization" + argspec: "args=[\'x\', \'mean\', \'variance\', \'offset\', \'scale\', \'variance_epsilon\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "bias_add" + argspec: "args=[\'value\', \'bias\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "bidirectional_dynamic_rnn" + argspec: "args=[\'cell_fw\', \'cell_bw\', \'inputs\', \'sequence_length\', \'initial_state_fw\', \'initial_state_bw\', \'dtype\', \'parallel_iterations\', \'swap_memory\', \'time_major\', \'scope\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'False\', \'False\', \'None\'], " + } + member_method { + name: "compute_accidental_hits" + argspec: "args=[\'true_classes\', \'sampled_candidates\', \'num_true\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "conv1d" + argspec: "args=[\'value\', \'filters\', \'stride\', \'padding\', \'use_cudnn_on_gpu\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "conv2d" + argspec: "args=[\'input\', \'filter\', \'strides\', \'padding\', \'use_cudnn_on_gpu\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "conv2d_backprop_filter" + argspec: "args=[\'input\', \'filter_sizes\', \'out_backprop\', \'strides\', \'padding\', \'use_cudnn_on_gpu\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "conv2d_backprop_input" + argspec: "args=[\'input_sizes\', \'filter\', \'out_backprop\', \'strides\', \'padding\', \'use_cudnn_on_gpu\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + 
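The `argspec` strings recorded throughout these goldens follow the shape reported by Python's `inspect` module. As a rough sketch only (this `format_argspec` helper is hypothetical, not the actual golden-file generator in `tensorflow/tools/api`), an entry such as the `rot90` one above could be rendered like this:

```python
import inspect

def format_argspec(func):
    """Render a signature in the goldens' argspec style (sketch only)."""
    spec = inspect.getfullargspec(func)
    args = ", ".join("'%s'" % a for a in spec.args)
    out = "args=[%s], varargs=%s, keywords=%s" % (args, spec.varargs, spec.varkw)
    if spec.defaults is None:
        return out + ", defaults=None"
    # Defaults are stringified, matching entries like defaults=['1', 'None'].
    defaults = ", ".join("'%s'" % d for d in spec.defaults)
    return out + ", defaults=[%s], " % defaults

# Hypothetical stand-in mirroring tf.image.rot90's signature.
def rot90(image, k=1, name=None):
    pass

# Produces the same string as the rot90 golden entry above.
print(format_argspec(rot90))
```

Note how `*args`/`**kwargs` surface as `varargs=args, keywords=kwargs` (as in the `tf.logging` entries), and how a trailing `", "` appears whenever defaults are present, exactly as in the recorded strings.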
member_method { + name: "conv2d_transpose" + argspec: "args=[\'value\', \'filter\', \'output_shape\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'SAME\', \'NHWC\', \'None\'], " + } + member_method { + name: "conv3d" + argspec: "args=[\'input\', \'filter\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "conv3d_backprop_filter_v2" + argspec: "args=[\'input\', \'filter_sizes\', \'out_backprop\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "conv3d_transpose" + argspec: "args=[\'value\', \'filter\', \'output_shape\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'SAME\', \'None\', \'None\'], " + } + member_method { + name: "convolution" + argspec: "args=[\'input\', \'filter\', \'padding\', \'strides\', \'dilation_rate\', \'name\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "crelu" + argspec: "args=[\'features\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "ctc_beam_search_decoder" + argspec: "args=[\'inputs\', \'sequence_length\', \'beam_width\', \'top_paths\', \'merge_repeated\'], varargs=None, keywords=None, defaults=[\'100\', \'1\', \'True\'], " + } + member_method { + name: "ctc_greedy_decoder" + argspec: "args=[\'inputs\', \'sequence_length\', \'merge_repeated\'], varargs=None, keywords=None, defaults=[\'True\'], " + } + member_method { + name: "ctc_loss" + argspec: "args=[\'labels\', \'inputs\', \'sequence_length\', \'preprocess_collapse_repeated\', \'ctc_merge_repeated\', \'time_major\'], varargs=None, keywords=None, defaults=[\'False\', \'True\', \'True\'], " + } + member_method { + name: "depthwise_conv2d" + argspec: 
"args=[\'input\', \'filter\', \'strides\', \'padding\', \'rate\', \'name\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "depthwise_conv2d_native" + argspec: "args=[\'input\', \'filter\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "depthwise_conv2d_native_backprop_filter" + argspec: "args=[\'input\', \'filter_sizes\', \'out_backprop\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "depthwise_conv2d_native_backprop_input" + argspec: "args=[\'input_sizes\', \'filter\', \'out_backprop\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "dilation2d" + argspec: "args=[\'input\', \'filter\', \'strides\', \'rates\', \'padding\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dropout" + argspec: "args=[\'x\', \'keep_prob\', \'noise_shape\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "dynamic_rnn" + argspec: "args=[\'cell\', \'inputs\', \'sequence_length\', \'initial_state\', \'dtype\', \'parallel_iterations\', \'swap_memory\', \'time_major\', \'scope\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'False\', \'False\', \'None\'], " + } + member_method { + name: "elu" + argspec: "args=[\'features\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "embedding_lookup" + argspec: "args=[\'params\', \'ids\', \'partition_strategy\', \'name\', \'validate_indices\', \'max_norm\'], varargs=None, keywords=None, defaults=[\'mod\', \'None\', \'True\', \'None\'], " + } + member_method { + name: 
"embedding_lookup_sparse" + argspec: "args=[\'params\', \'sp_ids\', \'sp_weights\', \'partition_strategy\', \'name\', \'combiner\', \'max_norm\'], varargs=None, keywords=None, defaults=[\'mod\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "erosion2d" + argspec: "args=[\'value\', \'kernel\', \'strides\', \'rates\', \'padding\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fixed_unigram_candidate_sampler" + argspec: "args=[\'true_classes\', \'num_true\', \'num_sampled\', \'unique\', \'range_max\', \'vocab_file\', \'distortion\', \'num_reserved_ids\', \'num_shards\', \'shard\', \'unigrams\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'\', \'1.0\', \'0\', \'1\', \'0\', \'()\', \'None\', \'None\'], " + } + member_method { + name: "fractional_avg_pool" + argspec: "args=[\'value\', \'pooling_ratio\', \'pseudo_random\', \'overlapping\', \'deterministic\', \'seed\', \'seed2\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "fractional_max_pool" + argspec: "args=[\'value\', \'pooling_ratio\', \'pseudo_random\', \'overlapping\', \'deterministic\', \'seed\', \'seed2\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "fused_batch_norm" + argspec: "args=[\'x\', \'scale\', \'offset\', \'mean\', \'variance\', \'epsilon\', \'data_format\', \'is_training\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'0.001\', \'NHWC\', \'True\', \'None\'], " + } + member_method { + name: "in_top_k" + argspec: "args=[\'predictions\', \'targets\', \'k\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "l2_loss" + argspec: "args=[\'t\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "l2_normalize" + argspec: 
"args=[\'x\', \'dim\', \'epsilon\', \'name\'], varargs=None, keywords=None, defaults=[\'1e-12\', \'None\'], " + } + member_method { + name: "learned_unigram_candidate_sampler" + argspec: "args=[\'true_classes\', \'num_true\', \'num_sampled\', \'unique\', \'range_max\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "local_response_normalization" + argspec: "args=[\'input\', \'depth_radius\', \'bias\', \'alpha\', \'beta\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "log_poisson_loss" + argspec: "args=[\'targets\', \'log_input\', \'compute_full_loss\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "log_softmax" + argspec: "args=[\'logits\', \'dim\', \'name\'], varargs=None, keywords=None, defaults=[\'-1\', \'None\'], " + } + member_method { + name: "log_uniform_candidate_sampler" + argspec: "args=[\'true_classes\', \'num_true\', \'num_sampled\', \'unique\', \'range_max\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "lrn" + argspec: "args=[\'input\', \'depth_radius\', \'bias\', \'alpha\', \'beta\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "max_pool" + argspec: "args=[\'value\', \'ksize\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'NHWC\', \'None\'], " + } + member_method { + name: "max_pool3d" + argspec: "args=[\'input\', \'ksize\', \'strides\', \'padding\', \'data_format\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "max_pool_with_argmax" + argspec: "args=[\'input\', \'ksize\', \'strides\', \'padding\', \'Targmax\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + 
} + member_method { + name: "moments" + argspec: "args=[\'x\', \'axes\', \'shift\', \'name\', \'keep_dims\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'False\'], " + } + member_method { + name: "nce_loss" + argspec: "args=[\'weights\', \'biases\', \'labels\', \'inputs\', \'num_sampled\', \'num_classes\', \'num_true\', \'sampled_values\', \'remove_accidental_hits\', \'partition_strategy\', \'name\'], varargs=None, keywords=None, defaults=[\'1\', \'None\', \'False\', \'mod\', \'nce_loss\'], " + } + member_method { + name: "normalize_moments" + argspec: "args=[\'counts\', \'mean_ss\', \'variance_ss\', \'shift\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "pool" + argspec: "args=[\'input\', \'window_shape\', \'pooling_type\', \'padding\', \'dilation_rate\', \'strides\', \'name\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "quantized_avg_pool" + argspec: "args=[\'input\', \'min_input\', \'max_input\', \'ksize\', \'strides\', \'padding\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "quantized_conv2d" + argspec: "args=[\'input\', \'filter\', \'min_input\', \'max_input\', \'min_filter\', \'max_filter\', \'strides\', \'padding\', \'out_type\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "quantized_max_pool" + argspec: "args=[\'input\', \'min_input\', \'max_input\', \'ksize\', \'strides\', \'padding\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "quantized_relu_x" + argspec: "args=[\'features\', \'max_value\', \'min_features\', \'max_features\', \'out_type\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "raw_rnn" + argspec: "args=[\'cell\', \'loop_fn\', \'parallel_iterations\', \'swap_memory\', \'scope\'], 
varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\'], " + } + member_method { + name: "relu" + argspec: "args=[\'features\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "relu6" + argspec: "args=[\'features\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "relu_layer" + argspec: "args=[\'x\', \'weights\', \'biases\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sampled_softmax_loss" + argspec: "args=[\'weights\', \'biases\', \'labels\', \'inputs\', \'num_sampled\', \'num_classes\', \'num_true\', \'sampled_values\', \'remove_accidental_hits\', \'partition_strategy\', \'name\'], varargs=None, keywords=None, defaults=[\'1\', \'None\', \'True\', \'mod\', \'sampled_softmax_loss\'], " + } + member_method { + name: "separable_conv2d" + argspec: "args=[\'input\', \'depthwise_filter\', \'pointwise_filter\', \'strides\', \'padding\', \'rate\', \'name\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "sigmoid" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sigmoid_cross_entropy_with_logits" + argspec: "args=[\'_sentinel\', \'labels\', \'logits\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "softmax" + argspec: "args=[\'logits\', \'dim\', \'name\'], varargs=None, keywords=None, defaults=[\'-1\', \'None\'], " + } + member_method { + name: "softmax_cross_entropy_with_logits" + argspec: "args=[\'_sentinel\', \'labels\', \'logits\', \'dim\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'-1\', \'None\'], " + } + member_method { + name: "softplus" + argspec: "args=[\'features\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: 
"softsign" + argspec: "args=[\'features\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_softmax_cross_entropy_with_logits" + argspec: "args=[\'_sentinel\', \'labels\', \'logits\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "sufficient_statistics" + argspec: "args=[\'x\', \'axes\', \'shift\', \'keep_dims\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\'], " + } + member_method { + name: "tanh" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "top_k" + argspec: "args=[\'input\', \'k\', \'sorted\', \'name\'], varargs=None, keywords=None, defaults=[\'1\', \'True\', \'None\'], " + } + member_method { + name: "uniform_candidate_sampler" + argspec: "args=[\'true_classes\', \'num_true\', \'num_sampled\', \'unique\', \'range_max\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "weighted_cross_entropy_with_logits" + argspec: "args=[\'targets\', \'logits\', \'pos_weight\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "weighted_moments" + argspec: "args=[\'x\', \'axes\', \'frequency_weights\', \'name\', \'keep_dims\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], " + } + member_method { + name: "with_space_to_batch" + argspec: "args=[\'input\', \'dilation_rate\', \'padding\', \'op\', \'filter_shape\', \'spatial_dims\', \'data_format\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "xw_plus_b" + argspec: "args=[\'x\', \'weights\', \'biases\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "zero_fraction" + argspec: "args=[\'value\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git 
a/tensorflow/tools/api/golden/tensorflow.ones_initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.ones_initializer.pbtxt new file mode 100644 index 00000000000..d84ddc6eb00 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.ones_initializer.pbtxt @@ -0,0 +1,10 @@ +path: "tensorflow.ones_initializer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'dtype\'], varargs=None, keywords=None, defaults=[\"\"], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.orthogonal_initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.orthogonal_initializer.pbtxt new file mode 100644 index 00000000000..c8e266e70cf --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.orthogonal_initializer.pbtxt @@ -0,0 +1,10 @@ +path: "tensorflow.orthogonal_initializer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'gain\', \'dtype\', \'seed\'], varargs=None, keywords=None, defaults=[\'1.0\', \"\", \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.pbtxt b/tensorflow/tools/api/golden/tensorflow.pbtxt new file mode 100644 index 00000000000..7a1a7e7949d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.pbtxt @@ -0,0 +1,1947 @@ +path: "tensorflow" +tf_module { + member { + name: "AggregationMethod" + mtype: "" + } + member { + name: "AttrValue" + mtype: "" + } + member { + name: "AutoParallelOptions" + mtype: "" + } + member { + name: "COMPILER_VERSION" + mtype: "" + } + member { + name: "ConditionalAccumulator" + mtype: "" + } + member { + name: "ConditionalAccumulatorBase" + mtype: "" + } + member { + name: "ConfigProto" + mtype: "" + } + member { + name: "DType" + mtype: "" + } + member { + name: "DeviceSpec" + mtype: "" + } + member { + name: "Dimension" + mtype: "" + } + member { + name: "Event" + mtype: "" + } + member { + name: "FIFOQueue" + mtype: "" + } + member 
{ + name: "FixedLenFeature" + mtype: "" + } + member { + name: "FixedLenSequenceFeature" + mtype: "" + } + member { + name: "FixedLengthRecordReader" + mtype: "" + } + member { + name: "GIT_VERSION" + mtype: "" + } + member { + name: "GPUOptions" + mtype: "" + } + member { + name: "GRAPH_DEF_VERSION" + mtype: "" + } + member { + name: "GRAPH_DEF_VERSION_MIN_CONSUMER" + mtype: "" + } + member { + name: "GRAPH_DEF_VERSION_MIN_PRODUCER" + mtype: "" + } + member { + name: "Graph" + mtype: "" + } + member { + name: "GraphDef" + mtype: "" + } + member { + name: "GraphKeys" + mtype: "" + } + member { + name: "GraphOptions" + mtype: "" + } + member { + name: "HistogramProto" + mtype: "" + } + member { + name: "IdentityReader" + mtype: "" + } + member { + name: "IndexedSlices" + mtype: "" + } + member { + name: "InteractiveSession" + mtype: "" + } + member { + name: "LogMessage" + mtype: "" + } + member { + name: "NameAttrList" + mtype: "" + } + member { + name: "NodeDef" + mtype: "" + } + member { + name: "OpError" + mtype: "" + } + member { + name: "Operation" + mtype: "" + } + member { + name: "OptimizerOptions" + mtype: "" + } + member { + name: "PaddingFIFOQueue" + mtype: "" + } + member { + name: "PriorityQueue" + mtype: "" + } + member { + name: "QUANTIZED_DTYPES" + mtype: "" + } + member { + name: "QueueBase" + mtype: "" + } + member { + name: "RandomShuffleQueue" + mtype: "" + } + member { + name: "ReaderBase" + mtype: "" + } + member { + name: "RegisterGradient" + mtype: "" + } + member { + name: "RewriterConfig" + mtype: "" + } + member { + name: "RunMetadata" + mtype: "" + } + member { + name: "RunOptions" + mtype: "" + } + member { + name: "Session" + mtype: "" + } + member { + name: "SessionLog" + mtype: "" + } + member { + name: "SparseConditionalAccumulator" + mtype: "" + } + member { + name: "SparseFeature" + mtype: "" + } + member { + name: "SparseTensor" + mtype: "" + } + member { + name: "SparseTensorValue" + mtype: "" + } + member { + name: "Summary" + 
mtype: "" + } + member { + name: "TFRecordReader" + mtype: "" + } + member { + name: "Tensor" + mtype: "" + } + member { + name: "TensorArray" + mtype: "" + } + member { + name: "TensorInfo" + mtype: "" + } + member { + name: "TensorShape" + mtype: "" + } + member { + name: "TextLineReader" + mtype: "" + } + member { + name: "VERSION" + mtype: "" + } + member { + name: "VarLenFeature" + mtype: "" + } + member { + name: "Variable" + mtype: "" + } + member { + name: "VariableScope" + mtype: "" + } + member { + name: "WholeFileReader" + mtype: "" + } + member { + name: "app" + mtype: "" + } + member { + name: "bfloat16" + mtype: "" + } + member { + name: "bool" + mtype: "" + } + member { + name: "compat" + mtype: "" + } + member { + name: "complex128" + mtype: "" + } + member { + name: "complex64" + mtype: "" + } + member { + name: "constant_initializer" + mtype: "" + } + member { + name: "contrib" + mtype: "" + } + member { + name: "double" + mtype: "" + } + member { + name: "errors" + mtype: "" + } + member { + name: "estimator" + mtype: "" + } + member { + name: "flags" + mtype: "" + } + member { + name: "float16" + mtype: "" + } + member { + name: "float32" + mtype: "" + } + member { + name: "float64" + mtype: "" + } + member { + name: "gfile" + mtype: "" + } + member { + name: "graph_util" + mtype: "" + } + member { + name: "half" + mtype: "" + } + member { + name: "image" + mtype: "" + } + member { + name: "int16" + mtype: "" + } + member { + name: "int32" + mtype: "" + } + member { + name: "int64" + mtype: "" + } + member { + name: "int8" + mtype: "" + } + member { + name: "layers" + mtype: "" + } + member { + name: "logging" + mtype: "" + } + member { + name: "losses" + mtype: "" + } + member { + name: "metrics" + mtype: "" + } + member { + name: "newaxis" + mtype: "" + } + member { + name: "nn" + mtype: "" + } + member { + name: "ones_initializer" + mtype: "" + } + member { + name: "orthogonal_initializer" + mtype: "" + } + member { + name: "python_io" + 
mtype: "" + } + member { + name: "pywrap_tensorflow" + mtype: "" + } + member { + name: "qint16" + mtype: "" + } + member { + name: "qint32" + mtype: "" + } + member { + name: "qint8" + mtype: "" + } + member { + name: "quint16" + mtype: "" + } + member { + name: "quint8" + mtype: "" + } + member { + name: "random_normal_initializer" + mtype: "" + } + member { + name: "random_uniform_initializer" + mtype: "" + } + member { + name: "resource" + mtype: "" + } + member { + name: "resource_loader" + mtype: "" + } + member { + name: "saved_model" + mtype: "" + } + member { + name: "sdca" + mtype: "" + } + member { + name: "sets" + mtype: "" + } + member { + name: "spectral" + mtype: "" + } + member { + name: "string" + mtype: "" + } + member { + name: "summary" + mtype: "" + } + member { + name: "sysconfig" + mtype: "" + } + member { + name: "test" + mtype: "" + } + member { + name: "train" + mtype: "" + } + member { + name: "truncated_normal_initializer" + mtype: "" + } + member { + name: "uint16" + mtype: "" + } + member { + name: "uint8" + mtype: "" + } + member { + name: "uniform_unit_scaling_initializer" + mtype: "" + } + member { + name: "user_ops" + mtype: "" + } + member { + name: "zeros_initializer" + mtype: "" + } + member_method { + name: "Assert" + argspec: "args=[\'condition\', \'data\', \'summarize\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "NoGradient" + argspec: "args=[\'op_type\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "NotDifferentiable" + argspec: "args=[\'op_type\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "Print" + argspec: "args=[\'input_\', \'data\', \'message\', \'first_n\', \'summarize\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "abs" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + 
member_method { + name: "accumulate_n" + argspec: "args=[\'inputs\', \'shape\', \'tensor_dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "acos" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "add" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "add_check_numerics_ops" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "add_n" + argspec: "args=[\'inputs\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "add_to_collection" + argspec: "args=[\'name\', \'value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "all_variables" + argspec: "args=[], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "arg_max" + argspec: "args=[\'input\', \'dimension\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "arg_min" + argspec: "args=[\'input\', \'dimension\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "argmax" + argspec: "args=[\'input\', \'axis\', \'name\', \'dimension\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "argmin" + argspec: "args=[\'input\', \'axis\', \'name\', \'dimension\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "as_dtype" + argspec: "args=[\'type_value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_string" + argspec: "args=[\'input\', \'precision\', \'scientific\', \'shortest\', \'width\', \'fill\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "asin" + argspec: 
"args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "assert_equal" + argspec: "args=[\'x\', \'y\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_greater" + argspec: "args=[\'x\', \'y\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_greater_equal" + argspec: "args=[\'x\', \'y\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_integer" + argspec: "args=[\'x\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "assert_less" + argspec: "args=[\'x\', \'y\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_less_equal" + argspec: "args=[\'x\', \'y\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_negative" + argspec: "args=[\'x\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_non_negative" + argspec: "args=[\'x\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_non_positive" + argspec: "args=[\'x\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_none_equal" + argspec: "args=[\'x\', \'y\', 
\'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_positive" + argspec: "args=[\'x\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_proper_iterable" + argspec: "args=[\'values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "assert_rank" + argspec: "args=[\'x\', \'rank\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_rank_at_least" + argspec: "args=[\'x\', \'rank\', \'data\', \'summarize\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "assert_same_float_dtype" + argspec: "args=[\'tensors\', \'dtype\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "assert_scalar" + argspec: "args=[\'tensor\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "assert_type" + argspec: "args=[\'tensor\', \'tf_type\', \'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "assert_variables_initialized" + argspec: "args=[\'var_list\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "assign" + argspec: "args=[\'ref\', \'value\', \'validate_shape\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "assign_add" + argspec: "args=[\'ref\', \'value\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "assign_sub" + argspec: "args=[\'ref\', \'value\', \'use_locking\', \'name\'], 
varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "atan" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "batch_to_space" + argspec: "args=[\'input\', \'crops\', \'block_size\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "batch_to_space_nd" + argspec: "args=[\'input\', \'block_shape\', \'crops\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "betainc" + argspec: "args=[\'a\', \'b\', \'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "bincount" + argspec: "args=[\'arr\', \'minlength\', \'maxlength\', \'weights\', \'dtype\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \"\"], " + } + member_method { + name: "bitcast" + argspec: "args=[\'input\', \'type\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "boolean_mask" + argspec: "args=[\'tensor\', \'mask\', \'name\'], varargs=None, keywords=None, defaults=[\'boolean_mask\'], " + } + member_method { + name: "broadcast_dynamic_shape" + argspec: "args=[\'shape_x\', \'shape_y\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "broadcast_static_shape" + argspec: "args=[\'shape_x\', \'shape_y\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "case" + argspec: "args=[\'pred_fn_pairs\', \'default\', \'exclusive\', \'strict\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'False\', \'case\'], " + } + member_method { + name: "cast" + argspec: "args=[\'x\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "ceil" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "check_numerics" + argspec: "args=[\'tensor\', 
\'message\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "cholesky" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "cholesky_solve" + argspec: "args=[\'chol\', \'rhs\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "clip_by_average_norm" + argspec: "args=[\'t\', \'clip_norm\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "clip_by_global_norm" + argspec: "args=[\'t_list\', \'clip_norm\', \'use_norm\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "clip_by_norm" + argspec: "args=[\'t\', \'clip_norm\', \'axes\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "clip_by_value" + argspec: "args=[\'t\', \'clip_value_min\', \'clip_value_max\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "complex" + argspec: "args=[\'real\', \'imag\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "concat" + argspec: "args=[\'values\', \'axis\', \'name\'], varargs=None, keywords=None, defaults=[\'concat\'], " + } + member_method { + name: "cond" + argspec: "args=[\'pred\', \'fn1\', \'fn2\', \'strict\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "confusion_matrix" + argspec: "args=[\'labels\', \'predictions\', \'num_classes\', \'dtype\', \'name\', \'weights\'], varargs=None, keywords=None, defaults=[\'None\', \"\", \'None\', \'None\'], " + } + member_method { + name: "conj" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "constant" + argspec: "args=[\'value\', \'dtype\', \'shape\', \'name\', \'verify_shape\'], varargs=None, keywords=None, 
defaults=[\'None\', \'None\', \'Const\', \'False\'], " + } + member_method { + name: "container" + argspec: "args=[\'container_name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "control_dependencies" + argspec: "args=[\'control_inputs\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "convert_to_tensor" + argspec: "args=[\'value\', \'dtype\', \'name\', \'preferred_dtype\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "convert_to_tensor_or_indexed_slices" + argspec: "args=[\'value\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "convert_to_tensor_or_sparse_tensor" + argspec: "args=[\'value\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "cos" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "count_nonzero" + argspec: "args=[\'input_tensor\', \'axis\', \'keep_dims\', \'dtype\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \"\", \'None\', \'None\'], " + } + member_method { + name: "count_up_to" + argspec: "args=[\'ref\', \'limit\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "create_partitioned_variables" + argspec: "args=[\'shape\', \'slicing\', \'initializer\', \'dtype\', \'trainable\', \'collections\', \'name\', \'reuse\'], varargs=None, keywords=None, defaults=[\"\", \'True\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "cross" + argspec: "args=[\'a\', \'b\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "cumprod" + argspec: "args=[\'x\', \'axis\', \'exclusive\', \'reverse\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'False\', \'False\', \'None\'], " + } + 
member_method { + name: "cumsum" + argspec: "args=[\'x\', \'axis\', \'exclusive\', \'reverse\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'False\', \'False\', \'None\'], " + } + member_method { + name: "decode_base64" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "decode_csv" + argspec: "args=[\'records\', \'record_defaults\', \'field_delim\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "decode_json_example" + argspec: "args=[\'json_examples\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "decode_raw" + argspec: "args=[\'bytes\', \'out_type\', \'little_endian\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "delete_session_tensor" + argspec: "args=[\'handle\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "depth_to_space" + argspec: "args=[\'input\', \'block_size\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dequantize" + argspec: "args=[\'input\', \'min_range\', \'max_range\', \'mode\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "deserialize_many_sparse" + argspec: "args=[\'serialized_sparse\', \'dtype\', \'rank\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "device" + argspec: "args=[\'device_name_or_function\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "diag" + argspec: "args=[\'diagonal\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "diag_part" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "digamma" + argspec: "args=[\'x\', \'name\'], 
varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "div" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "divide" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dynamic_partition" + argspec: "args=[\'data\', \'partitions\', \'num_partitions\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "dynamic_stitch" + argspec: "args=[\'indices\', \'data\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "edit_distance" + argspec: "args=[\'hypothesis\', \'truth\', \'normalize\', \'name\'], varargs=None, keywords=None, defaults=[\'True\', \'edit_distance\'], " + } + member_method { + name: "einsum" + argspec: "args=[\'equation\'], varargs=inputs, keywords=None, defaults=None" + } + member_method { + name: "encode_base64" + argspec: "args=[\'input\', \'pad\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "equal" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "erf" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "erfc" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "exp" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "expand_dims" + argspec: "args=[\'input\', \'axis\', \'name\', \'dim\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "expm1" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "extract_image_patches" + argspec: "args=[\'images\', 
\'ksizes\', \'strides\', \'rates\', \'padding\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "eye" + argspec: "args=[\'num_rows\', \'num_columns\', \'batch_shape\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \"\", \'None\'], " + } + member_method { + name: "fake_quant_with_min_max_args" + argspec: "args=[\'inputs\', \'min\', \'max\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "fake_quant_with_min_max_args_gradient" + argspec: "args=[\'gradients\', \'inputs\', \'min\', \'max\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "fake_quant_with_min_max_vars" + argspec: "args=[\'inputs\', \'min\', \'max\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fake_quant_with_min_max_vars_gradient" + argspec: "args=[\'gradients\', \'inputs\', \'min\', \'max\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fake_quant_with_min_max_vars_per_channel" + argspec: "args=[\'inputs\', \'min\', \'max\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fake_quant_with_min_max_vars_per_channel_gradient" + argspec: "args=[\'gradients\', \'inputs\', \'min\', \'max\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fft" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fft2d" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fft3d" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fill" + argspec: "args=[\'dims\', \'value\', \'name\'], varargs=None, keywords=None, 
defaults=[\'None\'], " + } + member_method { + name: "fixed_size_partitioner" + argspec: "args=[\'num_shards\', \'axis\'], varargs=None, keywords=None, defaults=[\'0\'], " + } + member_method { + name: "floor" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "floor_div" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "floordiv" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "floormod" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "foldl" + argspec: "args=[\'fn\', \'elems\', \'initializer\', \'parallel_iterations\', \'back_prop\', \'swap_memory\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'10\', \'True\', \'False\', \'None\'], " + } + member_method { + name: "foldr" + argspec: "args=[\'fn\', \'elems\', \'initializer\', \'parallel_iterations\', \'back_prop\', \'swap_memory\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'10\', \'True\', \'False\', \'None\'], " + } + member_method { + name: "gather" + argspec: "args=[\'params\', \'indices\', \'validate_indices\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "gather_nd" + argspec: "args=[\'params\', \'indices\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_collection" + argspec: "args=[\'key\', \'scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_collection_ref" + argspec: "args=[\'key\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_default_graph" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_default_session" + argspec: "args=[], varargs=None, 
keywords=None, defaults=None" + } + member_method { + name: "get_local_variable" + argspec: "args=[], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "get_seed" + argspec: "args=[\'op_seed\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_session_handle" + argspec: "args=[\'data\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_session_tensor" + argspec: "args=[\'handle\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_variable" + argspec: "args=[\'name\', \'shape\', \'dtype\', \'initializer\', \'regularizer\', \'trainable\', \'collections\', \'caching_device\', \'partitioner\', \'validate_shape\', \'use_resource\', \'custom_getter\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\', \'None\', \'True\', \'None\', \'None\'], " + } + member_method { + name: "get_variable_scope" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "global_norm" + argspec: "args=[\'t_list\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "global_variables" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "global_variables_initializer" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "gradients" + argspec: "args=[\'ys\', \'xs\', \'grad_ys\', \'name\', \'colocate_gradients_with_ops\', \'gate_gradients\', \'aggregation_method\'], varargs=None, keywords=None, defaults=[\'None\', \'gradients\', \'False\', \'False\', \'None\'], " + } + member_method { + name: "greater" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "greater_equal" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, 
defaults=[\'None\'], " + } + member_method { + name: "group" + argspec: "args=[], varargs=inputs, keywords=kwargs, defaults=None" + } + member_method { + name: "hessians" + argspec: "args=[\'ys\', \'xs\', \'name\', \'colocate_gradients_with_ops\', \'gate_gradients\', \'aggregation_method\'], varargs=None, keywords=None, defaults=[\'hessians\', \'False\', \'False\', \'None\'], " + } + member_method { + name: "histogram_fixed_width" + argspec: "args=[\'values\', \'value_range\', \'nbins\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'100\', \"<dtype: \'int32\'>\", \'None\'], " + } + member_method { + name: "identity" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "ifft" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "ifft2d" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "ifft3d" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "igamma" + argspec: "args=[\'a\', \'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "igammac" + argspec: "args=[\'a\', \'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "imag" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "import_graph_def" + argspec: "args=[\'graph_def\', \'input_map\', \'return_elements\', \'name\', \'op_dict\', \'producer_op_list\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "initialize_all_tables" + argspec: "args=[], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "initialize_all_variables" + argspec: "args=[], varargs=args, keywords=kwargs, 
defaults=None" + } + member_method { + name: "initialize_local_variables" + argspec: "args=[], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "initialize_variables" + argspec: "args=[], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "invert_permutation" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "is_finite" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "is_inf" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "is_nan" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "is_non_decreasing" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "is_numeric_tensor" + argspec: "args=[\'tensor\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_strictly_increasing" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "is_variable_initialized" + argspec: "args=[\'variable\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "lbeta" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'lbeta\'], " + } + member_method { + name: "less" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "less_equal" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "lgamma" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "lin_space" + argspec: "args=[\'start\', \'stop\', \'num\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " 
+ } + member_method { + name: "linspace" + argspec: "args=[\'start\', \'stop\', \'num\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "load_file_system_library" + argspec: "args=[\'library_filename\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "load_op_library" + argspec: "args=[\'library_filename\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "local_variables" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "local_variables_initializer" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "log" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "log1p" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "logical_and" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "logical_not" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "logical_or" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "logical_xor" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'LogicalXor\'], " + } + member_method { + name: "make_template" + argspec: "args=[\'name_\', \'func_\', \'create_scope_now_\', \'unique_name_\', \'custom_getter_\'], varargs=None, keywords=kwargs, defaults=[\'False\', \'None\', \'None\'], " + } + member_method { + name: "map_fn" + argspec: "args=[\'fn\', \'elems\', \'dtype\', \'parallel_iterations\', \'back_prop\', \'swap_memory\', \'infer_shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'10\', \'True\', \'False\', \'True\', \'None\'], " + } + member_method { + name: 
"matching_files" + argspec: "args=[\'pattern\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "matmul" + argspec: "args=[\'a\', \'b\', \'transpose_a\', \'transpose_b\', \'adjoint_a\', \'adjoint_b\', \'a_is_sparse\', \'b_is_sparse\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'False\', \'False\', \'False\', \'False\', \'False\', \'None\'], " + } + member_method { + name: "matrix_band_part" + argspec: "args=[\'input\', \'num_lower\', \'num_upper\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "matrix_determinant" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "matrix_diag" + argspec: "args=[\'diagonal\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "matrix_diag_part" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "matrix_inverse" + argspec: "args=[\'input\', \'adjoint\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "matrix_set_diag" + argspec: "args=[\'input\', \'diagonal\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "matrix_solve" + argspec: "args=[\'matrix\', \'rhs\', \'adjoint\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "matrix_solve_ls" + argspec: "args=[\'matrix\', \'rhs\', \'l2_regularizer\', \'fast\', \'name\'], varargs=None, keywords=None, defaults=[\'0.0\', \'True\', \'None\'], " + } + member_method { + name: "matrix_transpose" + argspec: "args=[\'a\', \'name\'], varargs=None, keywords=None, defaults=[\'matrix_transpose\'], " + } + member_method { + name: "matrix_triangular_solve" + argspec: "args=[\'matrix\', \'rhs\', \'lower\', \'adjoint\', \'name\'], varargs=None, keywords=None, 
defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "maximum" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "meshgrid" + argspec: "args=[], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "min_max_variable_partitioner" + argspec: "args=[\'max_partitions\', \'axis\', \'min_slice_size\', \'bytes_per_string_element\'], varargs=None, keywords=None, defaults=[\'1\', \'0\', \'262144\', \'16\'], " + } + member_method { + name: "minimum" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "mod" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "model_variables" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "moving_average_variables" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "multinomial" + argspec: "args=[\'logits\', \'num_samples\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "multiply" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "name_scope" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "negative" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "no_op" + argspec: "args=[\'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "no_regularizer" + argspec: "args=[\'_\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "norm" + argspec: "args=[\'tensor\', \'ord\', \'axis\', \'keep_dims\', \'name\'], varargs=None, keywords=None, defaults=[\'euclidean\', \'None\', \'False\', 
\'None\'], " + } + member_method { + name: "not_equal" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "one_hot" + argspec: "args=[\'indices\', \'depth\', \'on_value\', \'off_value\', \'axis\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "ones" + argspec: "args=[\'shape\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\"<dtype: \'float32\'>\", \'None\'], " + } + member_method { + name: "ones_like" + argspec: "args=[\'tensor\', \'dtype\', \'name\', \'optimize\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'True\'], " + } + member_method { + name: "op_scope" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "pad" + argspec: "args=[\'tensor\', \'paddings\', \'mode\', \'name\'], varargs=None, keywords=None, defaults=[\'CONSTANT\', \'None\'], " + } + member_method { + name: "parallel_stack" + argspec: "args=[\'values\', \'name\'], varargs=None, keywords=None, defaults=[\'parallel_stack\'], " + } + member_method { + name: "parse_example" + argspec: "args=[\'serialized\', \'features\', \'name\', \'example_names\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "parse_single_example" + argspec: "args=[\'serialized\', \'features\', \'name\', \'example_names\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "parse_single_sequence_example" + argspec: "args=[\'serialized\', \'context_features\', \'sequence_features\', \'example_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "parse_tensor" + argspec: "args=[\'serialized\', \'out_type\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "placeholder" + argspec: "args=[\'dtype\', 
\'shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "placeholder_with_default" + argspec: "args=[\'input\', \'shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "polygamma" + argspec: "args=[\'a\', \'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "pow" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "py_func" + argspec: "args=[\'func\', \'inp\', \'Tout\', \'stateful\', \'name\'], varargs=None, keywords=None, defaults=[\'True\', \'None\'], " + } + member_method { + name: "qr" + argspec: "args=[\'input\', \'full_matrices\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "quantize_v2" + argspec: "args=[\'input\', \'min_range\', \'max_range\', \'T\', \'mode\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "quantized_concat" + argspec: "args=[\'concat_dim\', \'values\', \'input_mins\', \'input_maxes\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "random_crop" + argspec: "args=[\'value\', \'size\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "random_gamma" + argspec: "args=[\'shape\', \'alpha\', \'beta\', \'dtype\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \"<dtype: \'float32\'>\", \'None\', \'None\'], " + } + member_method { + name: "random_normal" + argspec: "args=[\'shape\', \'mean\', \'stddev\', \'dtype\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \"<dtype: \'float32\'>\", \'None\', \'None\'], " + } + member_method { + name: "random_poisson" + argspec: "args=[\'lam\', \'shape\', \'dtype\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\"<dtype: \'float32\'>\", \'None\', \'None\'], " + 
} + member_method { + name: "random_shuffle" + argspec: "args=[\'value\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "random_uniform" + argspec: "args=[\'shape\', \'minval\', \'maxval\', \'dtype\', \'seed\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'None\', \"<dtype: \'float32\'>\", \'None\', \'None\'], " + } + member_method { + name: "range" + argspec: "args=[\'start\', \'limit\', \'delta\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'range\'], " + } + member_method { + name: "rank" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "read_file" + argspec: "args=[\'filename\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "real" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "realdiv" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reciprocal" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reduce_all" + argspec: "args=[\'input_tensor\', \'axis\', \'keep_dims\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "reduce_any" + argspec: "args=[\'input_tensor\', \'axis\', \'keep_dims\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "reduce_join" + argspec: "args=[\'inputs\', \'axis\', \'keep_dims\', \'separator\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'\', \'None\', \'None\'], " + } + member_method { + name: "reduce_logsumexp" + argspec: "args=[\'input_tensor\', \'axis\', 
\'keep_dims\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "reduce_max" + argspec: "args=[\'input_tensor\', \'axis\', \'keep_dims\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "reduce_mean" + argspec: "args=[\'input_tensor\', \'axis\', \'keep_dims\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "reduce_min" + argspec: "args=[\'input_tensor\', \'axis\', \'keep_dims\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "reduce_prod" + argspec: "args=[\'input_tensor\', \'axis\', \'keep_dims\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "reduce_sum" + argspec: "args=[\'input_tensor\', \'axis\', \'keep_dims\', \'name\', \'reduction_indices\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "register_tensor_conversion_function" + argspec: "args=[\'base_type\', \'conversion_func\', \'priority\'], varargs=None, keywords=None, defaults=[\'100\'], " + } + member_method { + name: "report_uninitialized_variables" + argspec: "args=[\'var_list\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'report_uninitialized_variables\'], " + } + member_method { + name: "required_space_to_batch_paddings" + argspec: "args=[\'input_shape\', \'block_shape\', \'base_paddings\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "reset_default_graph" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "reshape" + argspec: 
"args=[\'tensor\', \'shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reverse" + argspec: "args=[\'tensor\', \'axis\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "reverse_sequence" + argspec: "args=[\'input\', \'seq_lengths\', \'seq_axis\', \'batch_axis\', \'name\', \'seq_dim\', \'batch_dim\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "reverse_v2" + argspec: "args=[\'tensor\', \'axis\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "rint" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "round" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "rsqrt" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "saturate_cast" + argspec: "args=[\'value\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "scalar_mul" + argspec: "args=[\'scalar\', \'x\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "scan" + argspec: "args=[\'fn\', \'elems\', \'initializer\', \'parallel_iterations\', \'back_prop\', \'swap_memory\', \'infer_shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'10\', \'True\', \'False\', \'True\', \'None\'], " + } + member_method { + name: "scatter_add" + argspec: "args=[\'ref\', \'indices\', \'updates\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "scatter_div" + argspec: "args=[\'ref\', \'indices\', \'updates\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "scatter_mul" + argspec: 
"args=[\'ref\', \'indices\', \'updates\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "scatter_nd" + argspec: "args=[\'indices\', \'updates\', \'shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "scatter_nd_add" + argspec: "args=[\'ref\', \'indices\', \'updates\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "scatter_nd_sub" + argspec: "args=[\'ref\', \'indices\', \'updates\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "scatter_nd_update" + argspec: "args=[\'ref\', \'indices\', \'updates\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "scatter_sub" + argspec: "args=[\'ref\', \'indices\', \'updates\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "scatter_update" + argspec: "args=[\'ref\', \'indices\', \'updates\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "segment_max" + argspec: "args=[\'data\', \'segment_ids\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "segment_mean" + argspec: "args=[\'data\', \'segment_ids\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "segment_min" + argspec: "args=[\'data\', \'segment_ids\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "segment_prod" + argspec: "args=[\'data\', \'segment_ids\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "segment_sum" + argspec: "args=[\'data\', \'segment_ids\', \'name\'], varargs=None, keywords=None, 
defaults=[\'None\'], " + } + member_method { + name: "self_adjoint_eig" + argspec: "args=[\'tensor\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "self_adjoint_eigvals" + argspec: "args=[\'tensor\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sequence_mask" + argspec: "args=[\'lengths\', \'maxlen\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \"<dtype: \'bool\'>\", \'None\'], " + } + member_method { + name: "serialize_many_sparse" + argspec: "args=[\'sp_input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "serialize_sparse" + argspec: "args=[\'sp_input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "set_random_seed" + argspec: "args=[\'seed\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "setdiff1d" + argspec: "args=[\'x\', \'y\', \'index_dtype\', \'name\'], varargs=None, keywords=None, defaults=[\"<dtype: \'int32\'>\", \'None\'], " + } + member_method { + name: "shape" + argspec: "args=[\'input\', \'name\', \'out_type\'], varargs=None, keywords=None, defaults=[\'None\', \"<dtype: \'int32\'>\"], " + } + member_method { + name: "shape_n" + argspec: "args=[\'input\', \'out_type\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "sigmoid" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sign" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sin" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "size" + argspec: "args=[\'input\', \'name\', \'out_type\'], varargs=None, keywords=None, defaults=[\'None\', \"<dtype: \'int32\'>\"], " + } + member_method { + name: "slice" + argspec: "args=[\'input_\', \'begin\', \'size\', \'name\'], 
varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "space_to_batch" + argspec: "args=[\'input\', \'paddings\', \'block_size\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "space_to_batch_nd" + argspec: "args=[\'input\', \'block_shape\', \'paddings\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "space_to_depth" + argspec: "args=[\'input\', \'block_size\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_add" + argspec: "args=[\'a\', \'b\', \'thresh\'], varargs=None, keywords=None, defaults=[\'0\'], " + } + member_method { + name: "sparse_concat" + argspec: "args=[\'axis\', \'sp_inputs\', \'name\', \'expand_nonconcat_dim\', \'concat_dim\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\'], " + } + member_method { + name: "sparse_fill_empty_rows" + argspec: "args=[\'sp_input\', \'default_value\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_mask" + argspec: "args=[\'a\', \'mask_indices\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_matmul" + argspec: "args=[\'a\', \'b\', \'transpose_a\', \'transpose_b\', \'a_is_sparse\', \'b_is_sparse\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "sparse_maximum" + argspec: "args=[\'sp_a\', \'sp_b\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_merge" + argspec: "args=[\'sp_ids\', \'sp_values\', \'vocab_size\', \'name\', \'already_sorted\'], varargs=None, keywords=None, defaults=[\'None\', \'False\'], " + } + member_method { + name: "sparse_minimum" + argspec: "args=[\'sp_a\', \'sp_b\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: 
"sparse_placeholder" + argspec: "args=[\'dtype\', \'shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "sparse_reduce_sum" + argspec: "args=[\'sp_input\', \'axis\', \'keep_dims\', \'reduction_axes\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\'], " + } + member_method { + name: "sparse_reduce_sum_sparse" + argspec: "args=[\'sp_input\', \'axis\', \'keep_dims\', \'reduction_axes\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\'], " + } + member_method { + name: "sparse_reorder" + argspec: "args=[\'sp_input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_reset_shape" + argspec: "args=[\'sp_input\', \'new_shape\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_reshape" + argspec: "args=[\'sp_input\', \'shape\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_retain" + argspec: "args=[\'sp_input\', \'to_retain\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "sparse_segment_mean" + argspec: "args=[\'data\', \'indices\', \'segment_ids\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_segment_sqrt_n" + argspec: "args=[\'data\', \'indices\', \'segment_ids\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_segment_sum" + argspec: "args=[\'data\', \'indices\', \'segment_ids\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_softmax" + argspec: "args=[\'sp_input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_split" + argspec: "args=[\'keyword_required\', \'sp_input\', \'num_split\', \'axis\', \'name\', \'split_dim\'], varargs=None, keywords=None, 
defaults=[\'KeywordRequired()\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "sparse_tensor_dense_matmul" + argspec: "args=[\'sp_a\', \'b\', \'adjoint_a\', \'adjoint_b\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'False\', \'None\'], " + } + member_method { + name: "sparse_tensor_to_dense" + argspec: "args=[\'sp_input\', \'default_value\', \'validate_indices\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'True\', \'None\'], " + } + member_method { + name: "sparse_to_dense" + argspec: "args=[\'sparse_indices\', \'output_shape\', \'sparse_values\', \'default_value\', \'validate_indices\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'True\', \'None\'], " + } + member_method { + name: "sparse_to_indicator" + argspec: "args=[\'sp_input\', \'vocab_size\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "sparse_transpose" + argspec: "args=[\'sp_input\', \'perm\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "split" + argspec: "args=[\'value\', \'num_or_size_splits\', \'axis\', \'num\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'None\', \'split\'], " + } + member_method { + name: "sqrt" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "square" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "squared_difference" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "squeeze" + argspec: "args=[\'input\', \'axis\', \'name\', \'squeeze_dims\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "stack" + argspec: "args=[\'values\', \'axis\', \'name\'], varargs=None, keywords=None, defaults=[\'0\', \'stack\'], " + } + 
member_method { + name: "stop_gradient" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "strided_slice" + argspec: "args=[\'input_\', \'begin\', \'end\', \'strides\', \'begin_mask\', \'end_mask\', \'ellipsis_mask\', \'new_axis_mask\', \'shrink_axis_mask\', \'var\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'0\', \'0\', \'0\', \'0\', \'0\', \'None\', \'None\'], " + } + member_method { + name: "string_join" + argspec: "args=[\'inputs\', \'separator\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "string_split" + argspec: "args=[\'source\', \'delimiter\'], varargs=None, keywords=None, defaults=[\' \'], " + } + member_method { + name: "string_to_hash_bucket" + argspec: "args=[\'string_tensor\', \'num_buckets\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "string_to_hash_bucket_fast" + argspec: "args=[\'input\', \'num_buckets\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "string_to_hash_bucket_strong" + argspec: "args=[\'input\', \'num_buckets\', \'key\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "string_to_number" + argspec: "args=[\'string_tensor\', \'out_type\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "substr" + argspec: "args=[\'input\', \'pos\', \'len\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "subtract" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "svd" + argspec: "args=[\'tensor\', \'full_matrices\', \'compute_uv\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'True\', \'None\'], " + } + member_method { + name: "tables_initializer" + argspec: 
"args=[\'name\'], varargs=None, keywords=None, defaults=[\'init_all_tables\'], " + } + member_method { + name: "tan" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "tanh" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "tensordot" + argspec: "args=[\'a\', \'b\', \'axes\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "tile" + argspec: "args=[\'input\', \'multiples\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "to_bfloat16" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'ToBFloat16\'], " + } + member_method { + name: "to_double" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'ToDouble\'], " + } + member_method { + name: "to_float" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'ToFloat\'], " + } + member_method { + name: "to_int32" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'ToInt32\'], " + } + member_method { + name: "to_int64" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'ToInt64\'], " + } + member_method { + name: "trace" + argspec: "args=[\'x\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "trainable_variables" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "transpose" + argspec: "args=[\'a\', \'perm\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'transpose\'], " + } + member_method { + name: "truediv" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "truncated_normal" + argspec: "args=[\'shape\', \'mean\', \'stddev\', \'dtype\', \'seed\', \'name\'], varargs=None, keywords=None, 
defaults=[\'0.0\', \'1.0\', \"\", \'None\', \'None\'], " + } + member_method { + name: "truncatediv" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "truncatemod" + argspec: "args=[\'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "tuple" + argspec: "args=[\'tensors\', \'name\', \'control_inputs\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "unique" + argspec: "args=[\'x\', \'out_idx\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "unique_with_counts" + argspec: "args=[\'x\', \'out_idx\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "unsorted_segment_max" + argspec: "args=[\'data\', \'segment_ids\', \'num_segments\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "unsorted_segment_sum" + argspec: "args=[\'data\', \'segment_ids\', \'num_segments\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "unstack" + argspec: "args=[\'value\', \'num\', \'axis\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'0\', \'unstack\'], " + } + member_method { + name: "variable_axis_size_partitioner" + argspec: "args=[\'max_shard_bytes\', \'axis\', \'bytes_per_string_element\', \'max_shards\'], varargs=None, keywords=None, defaults=[\'0\', \'16\', \'None\'], " + } + member_method { + name: "variable_op_scope" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "variable_scope" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "variables_initializer" + argspec: "args=[\'var_list\', \'name\'], varargs=None, keywords=None, defaults=[\'init\'], " + } + member_method { + name: "verify_tensor_all_finite" + 
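The `unsorted_segment_sum(data, segment_ids, num_segments)` entry above sums the entries of `data` that share a segment id, with ids in any order. A pure-Python analogue over flat lists (not the TensorFlow kernel, which operates on tensors) to illustrate the semantics:

```python
def unsorted_segment_sum(data, segment_ids, num_segments):
    """Sum data[i] into out[segment_ids[i]]; segment ids may appear in any order."""
    out = [0] * num_segments
    for value, seg in zip(data, segment_ids):
        out[seg] += value
    return out
```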
argspec: "args=[\'t\', \'msg\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "where" + argspec: "args=[\'condition\', \'x\', \'y\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "while_loop" + argspec: "args=[\'cond\', \'body\', \'loop_vars\', \'shape_invariants\', \'parallel_iterations\', \'back_prop\', \'swap_memory\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'10\', \'True\', \'False\', \'None\'], " + } + member_method { + name: "write_file" + argspec: "args=[\'filename\', \'contents\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "zeros" + argspec: "args=[\'shape\', \'dtype\', \'name\'], varargs=None, keywords=None, defaults=[\"\", \'None\'], " + } + member_method { + name: "zeros_like" + argspec: "args=[\'tensor\', \'dtype\', \'name\', \'optimize\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'True\'], " + } + member_method { + name: "zeta" + argspec: "args=[\'x\', \'q\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-compression-type.pbtxt b/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-compression-type.pbtxt new file mode 100644 index 00000000000..4941dda50e4 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-compression-type.pbtxt @@ -0,0 +1,20 @@ +path: "tensorflow.python_io.TFRecordCompressionType" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "GZIP" + mtype: "" + } + member { + name: "NONE" + mtype: "" + } + member { + name: "ZLIB" + mtype: "" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-options.pbtxt b/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-options.pbtxt new file mode 100644 index 
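The `while_loop(cond, body, loop_vars, ...)` entry above builds a graph-mode loop; ignoring the graph-only parameters (`shape_invariants`, `parallel_iterations`, `back_prop`, `swap_memory`), its calling convention behaves like this plain-Python sketch, where `body` returns the next loop variables as a tuple:

```python
def while_loop(cond, body, loop_vars):
    """Eager-style analogue: repeatedly apply body while cond holds, then
    return the final loop variables."""
    while cond(*loop_vars):
        loop_vars = body(*loop_vars)
    return loop_vars
```

For example, summing `0..4` with a counter and an accumulator returns `(5, 10)`.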
00000000000..0853716023a --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-options.pbtxt @@ -0,0 +1,17 @@ +path: "tensorflow.python_io.TFRecordOptions" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "compression_type_map" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'compression_type\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_compression_type_string" + argspec: "args=[\'cls\', \'options\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-writer.pbtxt b/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-writer.pbtxt new file mode 100644 index 00000000000..af0c11ca14d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.python_io.-t-f-record-writer.pbtxt @@ -0,0 +1,17 @@ +path: "tensorflow.python_io.TFRecordWriter" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'path\', \'options\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "write" + argspec: "args=[\'self\', \'record\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.python_io.pbtxt b/tensorflow/tools/api/golden/tensorflow.python_io.pbtxt new file mode 100644 index 00000000000..7c9953e5fe3 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.python_io.pbtxt @@ -0,0 +1,19 @@ +path: "tensorflow.python_io" +tf_module { + member { + name: "TFRecordCompressionType" + mtype: "" + } + member { + name: "TFRecordOptions" + mtype: "" + } + member { + name: "TFRecordWriter" + mtype: "" + } + member_method { + name: "tf_record_iterator" + argspec: "args=[\'path\', \'options\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} 
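The `TFRecordWriter` and `tf_record_iterator` entries above read and write the TFRecord container. A sketch of that framing, assuming the documented layout (little-endian uint64 payload length, masked CRC-32C of the length, payload bytes, masked CRC-32C of the payload); the reader here skips CRC verification for brevity, and the CRC-32C table is derived from the Castagnoli polynomial:

```python
import io
import struct

# CRC-32C (Castagnoli) lookup table, reflected polynomial 0x82F63B78.
_TABLE = []
for i in range(256):
    crc = i
    for _ in range(8):
        crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    _TABLE.append(crc)

def crc32c(data):
    crc = 0xFFFFFFFF
    for byte in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFFFFFF

def masked_crc(data):
    # TFRecord masks CRCs: rotate right by 15 bits, then add a constant.
    crc = crc32c(data)
    return ((crc >> 15 | crc << 17) + 0xA282EAD8) & 0xFFFFFFFF

def write_record(stream, payload):
    header = struct.pack("<Q", len(payload))
    stream.write(header)
    stream.write(struct.pack("<I", masked_crc(header)))
    stream.write(payload)
    stream.write(struct.pack("<I", masked_crc(payload)))

def read_records(stream):
    while True:
        header = stream.read(8)
        if not header:
            return
        (length,) = struct.unpack("<Q", header)
        stream.read(4)            # length CRC (not verified in this sketch)
        payload = stream.read(length)
        stream.read(4)            # payload CRC (not verified in this sketch)
        yield payload
```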
diff --git a/tensorflow/tools/api/golden/tensorflow.random_normal_initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.random_normal_initializer.pbtxt new file mode 100644 index 00000000000..70308bc6014 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.random_normal_initializer.pbtxt @@ -0,0 +1,10 @@ +path: "tensorflow.random_normal_initializer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'mean\', \'stddev\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'None\', \"\"], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.random_uniform_initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.random_uniform_initializer.pbtxt new file mode 100644 index 00000000000..37bb1956e82 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.random_uniform_initializer.pbtxt @@ -0,0 +1,10 @@ +path: "tensorflow.random_uniform_initializer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'minval\', \'maxval\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'0\', \'None\', \'None\', \"\"], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.resource_loader.pbtxt b/tensorflow/tools/api/golden/tensorflow.resource_loader.pbtxt new file mode 100644 index 00000000000..288b78b4cd0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.resource_loader.pbtxt @@ -0,0 +1,23 @@ +path: "tensorflow.resource_loader" +tf_module { + member_method { + name: "get_data_files_path" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_path_to_datafile" + argspec: "args=[\'path\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_root_dir_with_all_resources" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "load_resource" + 
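The initializer classes above follow a configure-once, call-per-shape pattern. A pure-Python analogue of that pattern for `random_normal_initializer(mean=0.0, stddev=1.0, seed=None, ...)`, using the stdlib `random` module and a flat element count in place of a tensor shape (class name and call convention here are illustrative, not the TF implementation):

```python
import random

class RandomNormalInitializer:
    """Analogue of the initializer pattern: configure distribution parameters
    once, then call the object to produce values of a requested size."""

    def __init__(self, mean=0.0, stddev=1.0, seed=None):
        self.mean = mean
        self.stddev = stddev
        self.rng = random.Random(seed)

    def __call__(self, num_elements):
        return [self.rng.gauss(self.mean, self.stddev)
                for _ in range(num_elements)]
```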
argspec: "args=[\'path\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "readahead_file_path" + argspec: "args=[\'path\', \'readahead\'], varargs=None, keywords=None, defaults=[\'128M\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.builder.-saved-model-builder.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.builder.-saved-model-builder.pbtxt new file mode 100644 index 00000000000..56d76902fd0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.builder.-saved-model-builder.pbtxt @@ -0,0 +1,21 @@ +path: "tensorflow.saved_model.builder.SavedModelBuilder" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'export_dir\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "add_meta_graph" + argspec: "args=[\'self\', \'tags\', \'signature_def_map\', \'assets_collection\', \'legacy_init_op\', \'clear_devices\', \'main_op\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "add_meta_graph_and_variables" + argspec: "args=[\'self\', \'sess\', \'tags\', \'signature_def_map\', \'assets_collection\', \'legacy_init_op\', \'clear_devices\', \'main_op\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "save" + argspec: "args=[\'self\', \'as_text\'], varargs=None, keywords=None, defaults=[\'False\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.builder.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.builder.pbtxt new file mode 100644 index 00000000000..adc697ad1c0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.builder.pbtxt @@ -0,0 +1,7 @@ +path: "tensorflow.saved_model.builder" +tf_module { + member { + name: "SavedModelBuilder" + mtype: "" + } +} diff --git 
a/tensorflow/tools/api/golden/tensorflow.saved_model.constants.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.constants.pbtxt new file mode 100644 index 00000000000..20e10aa094f --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.constants.pbtxt @@ -0,0 +1,39 @@ +path: "tensorflow.saved_model.constants" +tf_module { + member { + name: "ASSETS_DIRECTORY" + mtype: "" + } + member { + name: "ASSETS_KEY" + mtype: "" + } + member { + name: "LEGACY_INIT_OP_KEY" + mtype: "" + } + member { + name: "MAIN_OP_KEY" + mtype: "" + } + member { + name: "SAVED_MODEL_FILENAME_PB" + mtype: "" + } + member { + name: "SAVED_MODEL_FILENAME_PBTXT" + mtype: "" + } + member { + name: "SAVED_MODEL_SCHEMA_VERSION" + mtype: "" + } + member { + name: "VARIABLES_DIRECTORY" + mtype: "" + } + member { + name: "VARIABLES_FILENAME" + mtype: "" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.loader.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.loader.pbtxt new file mode 100644 index 00000000000..896e2160c69 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.loader.pbtxt @@ -0,0 +1,11 @@ +path: "tensorflow.saved_model.loader" +tf_module { + member_method { + name: "load" + argspec: "args=[\'sess\', \'tags\', \'export_dir\'], varargs=None, keywords=saver_kwargs, defaults=None" + } + member_method { + name: "maybe_saved_model_directory" + argspec: "args=[\'export_dir\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.main_op.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.main_op.pbtxt new file mode 100644 index 00000000000..176cb788c24 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.main_op.pbtxt @@ -0,0 +1,11 @@ +path: "tensorflow.saved_model.main_op" +tf_module { + member_method { + name: "main_op" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: 
"main_op_with_restore" + argspec: "args=[\'restore_op_name\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.pbtxt new file mode 100644 index 00000000000..5683766b289 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.pbtxt @@ -0,0 +1,35 @@ +path: "tensorflow.saved_model" +tf_module { + member { + name: "builder" + mtype: "" + } + member { + name: "constants" + mtype: "" + } + member { + name: "loader" + mtype: "" + } + member { + name: "main_op" + mtype: "" + } + member { + name: "signature_constants" + mtype: "" + } + member { + name: "signature_def_utils" + mtype: "" + } + member { + name: "tag_constants" + mtype: "" + } + member { + name: "utils" + mtype: "" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.signature_constants.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.signature_constants.pbtxt new file mode 100644 index 00000000000..478d410e066 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.signature_constants.pbtxt @@ -0,0 +1,47 @@ +path: "tensorflow.saved_model.signature_constants" +tf_module { + member { + name: "CLASSIFY_INPUTS" + mtype: "" + } + member { + name: "CLASSIFY_METHOD_NAME" + mtype: "" + } + member { + name: "CLASSIFY_OUTPUT_CLASSES" + mtype: "" + } + member { + name: "CLASSIFY_OUTPUT_SCORES" + mtype: "" + } + member { + name: "DEFAULT_SERVING_SIGNATURE_DEF_KEY" + mtype: "" + } + member { + name: "PREDICT_INPUTS" + mtype: "" + } + member { + name: "PREDICT_METHOD_NAME" + mtype: "" + } + member { + name: "PREDICT_OUTPUTS" + mtype: "" + } + member { + name: "REGRESS_INPUTS" + mtype: "" + } + member { + name: "REGRESS_METHOD_NAME" + mtype: "" + } + member { + name: "REGRESS_OUTPUTS" + mtype: "" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.signature_def_utils.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.saved_model.signature_def_utils.pbtxt new file mode 100644 index 00000000000..e9867d84c3e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.signature_def_utils.pbtxt @@ -0,0 +1,19 @@ +path: "tensorflow.saved_model.signature_def_utils" +tf_module { + member_method { + name: "build_signature_def" + argspec: "args=[\'inputs\', \'outputs\', \'method_name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "classification_signature_def" + argspec: "args=[\'examples\', \'classes\', \'scores\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "predict_signature_def" + argspec: "args=[\'inputs\', \'outputs\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "regression_signature_def" + argspec: "args=[\'examples\', \'predictions\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.tag_constants.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.tag_constants.pbtxt new file mode 100644 index 00000000000..7c24b7ad3cf --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.tag_constants.pbtxt @@ -0,0 +1,11 @@ +path: "tensorflow.saved_model.tag_constants" +tf_module { + member { + name: "SERVING" + mtype: "" + } + member { + name: "TRAINING" + mtype: "" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.saved_model.utils.pbtxt b/tensorflow/tools/api/golden/tensorflow.saved_model.utils.pbtxt new file mode 100644 index 00000000000..bc150e56a36 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.saved_model.utils.pbtxt @@ -0,0 +1,7 @@ +path: "tensorflow.saved_model.utils" +tf_module { + member_method { + name: "build_tensor_info" + argspec: "args=[\'tensor\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.sdca.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.sdca.pbtxt new file mode 100644 index 00000000000..7c9e6a25c7a --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.sdca.pbtxt @@ -0,0 +1,3 @@ +path: "tensorflow.sdca" +tf_module { +} diff --git a/tensorflow/tools/api/golden/tensorflow.sets.pbtxt b/tensorflow/tools/api/golden/tensorflow.sets.pbtxt new file mode 100644 index 00000000000..8a196b1a556 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.sets.pbtxt @@ -0,0 +1,19 @@ +path: "tensorflow.sets" +tf_module { + member_method { + name: "set_difference" + argspec: "args=[\'a\', \'b\', \'aminusb\', \'validate_indices\'], varargs=None, keywords=None, defaults=[\'True\', \'True\'], " + } + member_method { + name: "set_intersection" + argspec: "args=[\'a\', \'b\', \'validate_indices\'], varargs=None, keywords=None, defaults=[\'True\'], " + } + member_method { + name: "set_size" + argspec: "args=[\'a\', \'validate_indices\'], varargs=None, keywords=None, defaults=[\'True\'], " + } + member_method { + name: "set_union" + argspec: "args=[\'a\', \'b\', \'validate_indices\'], varargs=None, keywords=None, defaults=[\'True\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.spectral.pbtxt b/tensorflow/tools/api/golden/tensorflow.spectral.pbtxt new file mode 100644 index 00000000000..84883c1a395 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.spectral.pbtxt @@ -0,0 +1,51 @@ +path: "tensorflow.spectral" +tf_module { + member_method { + name: "fft" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fft2d" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "fft3d" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "ifft" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: 
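The `tensorflow.sets` entries above apply set operations row-wise over the last dimension of batched inputs. A plain-Python sketch of that row-wise behavior over lists of lists (TensorFlow operates on sparse/dense tensors and validates indices; this analogue just sorts each result row):

```python
def _rowwise(op, a, b):
    """Apply a binary set operation to corresponding rows of two batches."""
    return [sorted(op(set(x), set(y))) for x, y in zip(a, b)]

def set_intersection(a, b):
    return _rowwise(set.__and__, a, b)

def set_difference(a, b, aminusb=True):
    return _rowwise(set.__sub__, a, b) if aminusb else _rowwise(set.__sub__, b, a)

def set_union(a, b):
    return _rowwise(set.__or__, a, b)

def set_size(a):
    return [len(set(x)) for x in a]
```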
"ifft2d" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "ifft3d" + argspec: "args=[\'input\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "irfft" + argspec: "args=[\'input_tensor\', \'fft_length\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "irfft2d" + argspec: "args=[\'input_tensor\', \'fft_length\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "irfft3d" + argspec: "args=[\'input_tensor\', \'fft_length\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "rfft" + argspec: "args=[\'input_tensor\', \'fft_length\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "rfft2d" + argspec: "args=[\'input_tensor\', \'fft_length\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "rfft3d" + argspec: "args=[\'input_tensor\', \'fft_length\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-event.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.-event.pbtxt new file mode 100644 index 00000000000..ab3449d80f6 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-event.pbtxt @@ -0,0 +1,112 @@ +path: "tensorflow.summary.Event" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FILE_VERSION_FIELD_NUMBER" + mtype: "" + } + member { + name: "GRAPH_DEF_FIELD_NUMBER" + mtype: "" + } + member { + name: "LOG_MESSAGE_FIELD_NUMBER" + mtype: "" + } + member { + name: "META_GRAPH_DEF_FIELD_NUMBER" + mtype: "" + } + member { + name: "SESSION_LOG_FIELD_NUMBER" + mtype: "" + } + 
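All the `tensorflow.spectral` entries above wrap fast Fourier transform kernels. The transform itself, written naively in O(n²) form with stdlib `cmath` (the real kernels use fast algorithms; this is only a reference for the forward convention with the negative exponent):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a sequence of numbers."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]
```

A unit impulse transforms to a flat spectrum, and a constant signal concentrates all energy in the zero-frequency bin.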
member { + name: "STEP_FIELD_NUMBER" + mtype: "" + } + member { + name: "SUMMARY_FIELD_NUMBER" + mtype: "" + } + member { + name: "TAGGED_RUN_METADATA_FIELD_NUMBER" + mtype: "" + } + member { + name: "WALL_TIME_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-file-writer-cache.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.-file-writer-cache.pbtxt new file mode 100644 index 00000000000..2a5b63dceae --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-file-writer-cache.pbtxt @@ -0,0 +1,16 @@ +path: "tensorflow.summary.FileWriterCache" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "clear" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get" + argspec: "args=[\'logdir\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-file-writer.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.summary.-file-writer.pbtxt new file mode 100644 index 00000000000..502c35ee7bb --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-file-writer.pbtxt @@ -0,0 +1,50 @@ +path: "tensorflow.summary.FileWriter" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'logdir\', \'graph\', \'max_queue\', \'flush_secs\', \'graph_def\'], varargs=None, keywords=None, defaults=[\'None\', \'10\', \'120\', \'None\'], " + } + member_method { + name: "add_event" + argspec: "args=[\'self\', \'event\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "add_graph" + argspec: "args=[\'self\', \'graph\', \'global_step\', \'graph_def\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "add_meta_graph" + argspec: "args=[\'self\', \'meta_graph_def\', \'global_step\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "add_run_metadata" + argspec: "args=[\'self\', \'run_metadata\', \'tag\', \'global_step\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "add_session_log" + argspec: "args=[\'self\', \'session_log\', \'global_step\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "add_summary" + argspec: "args=[\'self\', \'summary\', \'global_step\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "flush" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_logdir" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "reopen" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git 
a/tensorflow/tools/api/golden/tensorflow.summary.-session-log.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.-session-log.pbtxt new file mode 100644 index 00000000000..92ca4872caf --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-session-log.pbtxt @@ -0,0 +1,108 @@ +path: "tensorflow.summary.SessionLog" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "CHECKPOINT" + mtype: "" + } + member { + name: "CHECKPOINT_PATH_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "MSG_FIELD_NUMBER" + mtype: "" + } + member { + name: "START" + mtype: "" + } + member { + name: "STATUS_FIELD_NUMBER" + mtype: "" + } + member { + name: "STATUS_UNSPECIFIED" + mtype: "" + } + member { + name: "STOP" + mtype: "" + } + member { + name: "SessionStatus" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-summary-description.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.summary.-summary-description.pbtxt new file mode 100644 index 00000000000..f93da2196ad --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-summary-description.pbtxt @@ -0,0 +1,80 @@ +path: "tensorflow.summary.SummaryDescription" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "TYPE_HINT_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-summary.-audio.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.-summary.-audio.pbtxt new file mode 100644 index 00000000000..605e305e82c --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-summary.-audio.pbtxt @@ -0,0 +1,96 @@ +path: "tensorflow.summary.Summary.Audio" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "CONTENT_TYPE_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: 
"ENCODED_AUDIO_STRING_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "LENGTH_FRAMES_FIELD_NUMBER" + mtype: "" + } + member { + name: "NUM_CHANNELS_FIELD_NUMBER" + mtype: "" + } + member { + name: "SAMPLE_RATE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-summary.-image.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.-summary.-image.pbtxt new file mode 100644 index 00000000000..0646972196d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-summary.-image.pbtxt @@ -0,0 +1,92 @@ +path: "tensorflow.summary.Summary.Image" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "COLORSPACE_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "ENCODED_IMAGE_STRING_FIELD_NUMBER" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "HEIGHT_FIELD_NUMBER" + mtype: "" + } + member { + name: "WIDTH_FIELD_NUMBER" + mtype: "" + } + 
member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-summary.-value.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.-summary.-value.pbtxt new file mode 100644 index 00000000000..5294b37f577 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-summary.-value.pbtxt @@ -0,0 +1,108 @@ +path: "tensorflow.summary.Summary.Value" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "AUDIO_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "HISTO_FIELD_NUMBER" + mtype: "" + } + member { + name: "IMAGE_FIELD_NUMBER" + mtype: "" + } + member { + name: "NODE_NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "OBSOLETE_OLD_STYLE_HISTOGRAM_FIELD_NUMBER" + mtype: "" + } + member { + name: "SIMPLE_VALUE_FIELD_NUMBER" + mtype: "" + } + member { + name: "TAG_FIELD_NUMBER" + mtype: "" + } + member { + name: "TENSOR_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { 
+ name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-summary.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.-summary.pbtxt new file mode 100644 index 00000000000..132ef1b7d2e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-summary.pbtxt @@ -0,0 +1,92 @@ +path: "tensorflow.summary.Summary" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "Audio" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "Image" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member { + name: "Value" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: 
"HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.-tagged-run-metadata.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.-tagged-run-metadata.pbtxt new file mode 100644 index 00000000000..4dce20819de --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.-tagged-run-metadata.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.summary.TaggedRunMetadata" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "RUN_METADATA_FIELD_NUMBER" + mtype: "" + } + member { + name: "TAG_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + 
member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.summary.pbtxt b/tensorflow/tools/api/golden/tensorflow.summary.pbtxt new file mode 100644 index 00000000000..c3d0bea10cb --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.summary.pbtxt @@ -0,0 +1,67 @@ +path: "tensorflow.summary" +tf_module { + member { + name: "Event" + mtype: "" + } + member { + name: "FileWriter" + mtype: "" + } + member { + name: "FileWriterCache" + mtype: "" + } + member { + name: "SessionLog" + mtype: "" + } + member { + name: "Summary" + mtype: "" + } + member { + name: "SummaryDescription" + mtype: "" + } + member { + name: "TaggedRunMetadata" + mtype: "" + } + member_method { + name: "audio" + argspec: "args=[\'name\', \'tensor\', \'sample_rate\', \'max_outputs\', \'collections\'], varargs=None, keywords=None, defaults=[\'3\', \'None\'], " + } + member_method { + name: "get_summary_description" + argspec: "args=[\'node_def\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "histogram" + argspec: "args=[\'name\', \'values\', \'collections\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "image" + argspec: "args=[\'name\', \'tensor\', \'max_outputs\', \'collections\'], varargs=None, keywords=None, defaults=[\'3\', \'None\'], " + } + member_method { + name: "merge" + argspec: "args=[\'inputs\', \'collections\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "merge_all" + argspec: "args=[\'key\'], varargs=None, keywords=None, defaults=[\'summaries\'], " + } + member_method { + name: "scalar" + argspec: "args=[\'name\', \'tensor\', \'collections\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "tensor_summary" + argspec: "args=[\'name\', \'tensor\', \'summary_description\', \'collections\'], 
varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "text" + argspec: "args=[\'name\', \'tensor\', \'collections\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.sysconfig.pbtxt b/tensorflow/tools/api/golden/tensorflow.sysconfig.pbtxt new file mode 100644 index 00000000000..02dec04b9cc --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.sysconfig.pbtxt @@ -0,0 +1,11 @@ +path: "tensorflow.sysconfig" +tf_module { + member_method { + name: "get_include" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_lib" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.test.-benchmark.pbtxt b/tensorflow/tools/api/golden/tensorflow.test.-benchmark.pbtxt new file mode 100644 index 00000000000..df528e26b60 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.test.-benchmark.pbtxt @@ -0,0 +1,21 @@ +path: "tensorflow.test.Benchmark" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "is_abstract" + argspec: "args=[\'cls\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "report_benchmark" + argspec: "args=[\'self\', \'iters\', \'cpu_time\', \'wall_time\', \'throughput\', \'extras\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "run_op_benchmark" + argspec: "args=[\'self\', \'sess\', \'op_or_tensor\', \'feed_dict\', \'burn_iters\', \'min_iters\', \'store_trace\', \'store_memory_usage\', \'name\', \'extras\', \'mbs\'], varargs=None, keywords=None, defaults=[\'None\', \'2\', \'10\', \'False\', \'True\', \'None\', \'None\', \'0\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.test.pbtxt 
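The `tensorflow.summary` golden above shows that summary ops take a `collections` argument and that `merge_all` defaults its key to `'summaries'`. As a hedged illustration of that bookkeeping (a plain-Python emulation, not TensorFlow's graph collections), the pattern looks like this:

```python
# Illustrative only: the collection idea behind tf.summary.merge_all,
# whose argspec above defaults key to 'summaries'. Summary ops append
# themselves to named collections; merge_all gathers one collection.
# This is a plain-Python emulation of the bookkeeping, not TF code.

collections = {}

def add_to_collection(key, value):
    collections.setdefault(key, []).append(value)

def scalar(name, tensor, collections_keys=None):
    """Record a scalar summary in the given (or default) collections."""
    summary = ("scalar", name, tensor)
    for key in (collections_keys or ["summaries"]):  # default key, per the argspec
        add_to_collection(key, summary)
    return summary

def merge_all(key="summaries"):
    """Return everything registered under the collection key."""
    return list(collections.get(key, []))

scalar("loss", 0.25)
scalar("accuracy", 0.9)
merged = merge_all()
```

In real TensorFlow the collection lives on the graph and `merge_all` returns a single merged summary op, but the collection-keyed registration shown in the argspecs follows this shape.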
b/tensorflow/tools/api/golden/tensorflow.test.pbtxt new file mode 100644 index 00000000000..c4768a68bfb --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.test.pbtxt @@ -0,0 +1,51 @@ +path: "tensorflow.test" +tf_module { + member { + name: "Benchmark" + mtype: "" + } + member { + name: "TestCase" + mtype: "" + } + member { + name: "mock" + mtype: "" + } + member_method { + name: "assert_equal_graph_def" + argspec: "args=[\'actual\', \'expected\', \'checkpoint_v2\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "compute_gradient" + argspec: "args=[\'x\', \'x_shape\', \'y\', \'y_shape\', \'x_init_value\', \'delta\', \'init_targets\', \'extra_feed_dict\'], varargs=None, keywords=None, defaults=[\'None\', \'0.001\', \'None\', \'None\'], " + } + member_method { + name: "compute_gradient_error" + argspec: "args=[\'x\', \'x_shape\', \'y\', \'y_shape\', \'x_init_value\', \'delta\', \'init_targets\', \'extra_feed_dict\'], varargs=None, keywords=None, defaults=[\'None\', \'0.001\', \'None\', \'None\'], " + } + member_method { + name: "get_temp_dir" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "gpu_device_name" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_built_with_cuda" + argspec: "args=[], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_gpu_available" + argspec: "args=[\'cuda_only\'], varargs=None, keywords=None, defaults=[\'False\'], " + } + member_method { + name: "main" + argspec: "args=[\'argv\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "test_src_dir_path" + argspec: "args=[\'relative_path\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-adadelta-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-adadelta-optimizer.pbtxt new file mode 100644 index 00000000000..8c91c5b4d9e 
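The `tensorflow.test` golden lists `compute_gradient` and `compute_gradient_error` with a `delta` default of `0.001`. The idea behind those helpers, sketched here in plain Python under the assumption of a central finite difference (TF's actual implementation works on tensors and is not reproduced here), is to compare an analytic gradient against a numeric one and report the worst deviation:

```python
# Illustrative only: the idea behind tf.test.compute_gradient_error.
# Compare an analytic gradient against a central finite difference and
# report the maximum absolute deviation. The delta default (0.001)
# mirrors the argspec above; the implementation is an assumption.

def numeric_gradient(f, x, delta=0.001):
    """Central-difference gradient of scalar f at the point list x."""
    grad = []
    for i in range(len(x)):
        bumped_up, bumped_down = list(x), list(x)
        bumped_up[i] += delta
        bumped_down[i] -= delta
        grad.append((f(bumped_up) - f(bumped_down)) / (2.0 * delta))
    return grad

def gradient_error(f, analytic_grad, x, delta=0.001):
    """Max absolute difference between analytic and numeric gradients."""
    numeric = numeric_gradient(f, x, delta)
    return max(abs(a - n) for a, n in zip(analytic_grad(x), numeric))

# f(x) = sum(x_i^2), so df/dx_i = 2 * x_i; the error should be tiny.
f = lambda x: sum(v * v for v in x)
g = lambda x: [2.0 * v for v in x]
err = gradient_error(f, g, [1.0, -2.0, 3.0])
```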
--- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-adadelta-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.AdadeltaOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'rho\', \'epsilon\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'0.001\', \'0.95\', \'1e-08\', \'False\', \'Adadelta\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-adagrad-d-a-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-adagrad-d-a-optimizer.pbtxt new file mode 100644 index 00000000000..05d38d62ccd --- /dev/null +++ 
b/tensorflow/tools/api/golden/tensorflow.train.-adagrad-d-a-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.AdagradDAOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'global_step\', \'initial_gradient_squared_accumulator_value\', \'l1_regularization_strength\', \'l2_regularization_strength\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'0.1\', \'0.0\', \'0.0\', \'False\', \'AdagradDA\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-adagrad-optimizer.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.train.-adagrad-optimizer.pbtxt new file mode 100644 index 00000000000..19ca9f57637 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-adagrad-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.AdagradOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'initial_accumulator_value\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'0.1\', \'False\', \'Adagrad\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-adam-optimizer.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.train.-adam-optimizer.pbtxt new file mode 100644 index 00000000000..c8144e2db78 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-adam-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.AdamOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'beta1\', \'beta2\', \'epsilon\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'0.001\', \'0.9\', \'0.999\', \'1e-08\', \'False\', \'Adam\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-bytes-list.pbtxt 
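The `AdamOptimizer` golden above records defaults of `learning_rate=0.001`, `beta1=0.9`, `beta2=0.999`, `epsilon=1e-08`. As a hedged sketch, here is the published Adam update rule written out in plain Python with those defaults (TF's kernel uses an algebraically equivalent but fused formulation, so this is the algorithm, not TF's code):

```python
# Illustrative only: the Adam update rule, with the same defaults as
# the AdamOptimizer argspec above. Moment estimates are bias-corrected
# before the step. Not TF's implementation, just the algorithm.

def adam_step(param, grad, m, v, t,
              learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08):
    """One Adam update for a scalar parameter; t is the 1-based step."""
    m = beta1 * m + (1.0 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1.0 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1.0 - beta1 ** t)                # bias corrections
    v_hat = v / (1.0 - beta2 ** t)
    param -= learning_rate * m_hat / (v_hat ** 0.5 + epsilon)
    return param, m, v

# Minimize f(w) = w^2 from w = 1.0; the gradient is 2w, so w should
# drift toward zero over many steps.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 2001):
    w, m, v = adam_step(w, 2.0 * w, m, v, t)
```

The `minimize` method in the argspec is simply `compute_gradients` followed by `apply_gradients`; this scalar loop stands in for that pairing.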
b/tensorflow/tools/api/golden/tensorflow.train.-bytes-list.pbtxt new file mode 100644 index 00000000000..8cf52b817f3 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-bytes-list.pbtxt @@ -0,0 +1,80 @@ +path: "tensorflow.train.BytesList" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-checkpoint-saver-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-checkpoint-saver-hook.pbtxt new file mode 100644 index 00000000000..c3037baa8c9 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-checkpoint-saver-hook.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.CheckpointSaverHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'checkpoint_dir\', \'save_secs\', \'save_steps\', \'saver\', 
\'checkpoint_basename\', \'scaffold\', \'listeners\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'model.ckpt\', \'None\', \'None\'], " + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-checkpoint-saver-listener.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-checkpoint-saver-listener.pbtxt new file mode 100644 index 00000000000..9d3688e5657 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-checkpoint-saver-listener.pbtxt @@ -0,0 +1,24 @@ +path: "tensorflow.train.CheckpointSaverListener" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "after_save" + argspec: "args=[\'self\', \'session\', \'global_step_value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_save" + argspec: "args=[\'self\', \'session\', \'global_step_value\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\', \'global_step_value\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-chief-session-creator.pbtxt 
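The `CheckpointSaverListener` golden above documents four callbacks: `begin`, `before_save`, `after_save`, and `end`, each receiving the session and global step. The driver loop below is a toy stand-in (an assumption, not `CheckpointSaverHook`'s real logic) that shows the order in which a listener sees those callbacks when saves fire on a `save_steps` cadence:

```python
# Illustrative only: the callback order a CheckpointSaverListener sees
# per the golden file above. The toy loop stands in for
# CheckpointSaverHook's save_steps cadence and is an assumption.

class RecordingListener:
    """Records the lifecycle callbacks in the order they fire."""
    def __init__(self):
        self.calls = []
    def begin(self):
        self.calls.append("begin")
    def before_save(self, session, global_step_value):
        self.calls.append("before_save@%d" % global_step_value)
    def after_save(self, session, global_step_value):
        self.calls.append("after_save@%d" % global_step_value)
    def end(self, session, global_step_value):
        self.calls.append("end@%d" % global_step_value)

def run_with_saves(listener, total_steps, save_steps):
    """Toy training loop: 'save' every save_steps steps, as the hook would."""
    listener.begin()
    session = None  # placeholder; the real hook passes the tf.Session
    for step in range(1, total_steps + 1):
        if step % save_steps == 0:
            listener.before_save(session, step)
            # ... saver.save(...) would happen here ...
            listener.after_save(session, step)
    listener.end(session, total_steps)
    return listener.calls

calls = run_with_saves(RecordingListener(), total_steps=6, save_steps=3)
```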
b/tensorflow/tools/api/golden/tensorflow.train.-chief-session-creator.pbtxt new file mode 100644 index 00000000000..abbe273be32 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-chief-session-creator.pbtxt @@ -0,0 +1,14 @@ +path: "tensorflow.train.ChiefSessionCreator" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'scaffold\', \'master\', \'config\', \'checkpoint_dir\', \'checkpoint_filename_with_path\'], varargs=None, keywords=None, defaults=[\'None\', \'\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "create_session" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-cluster-def.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-cluster-def.pbtxt new file mode 100644 index 00000000000..feb73bd7d4f --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-cluster-def.pbtxt @@ -0,0 +1,80 @@ +path: "tensorflow.train.ClusterDef" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "JOB_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { 
+ name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-cluster-spec.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-cluster-spec.pbtxt new file mode 100644 index 00000000000..1658b15a5f8 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-cluster-spec.pbtxt @@ -0,0 +1,37 @@ +path: "tensorflow.train.ClusterSpec" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "jobs" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'cluster\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_cluster_def" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "as_dict" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "job_tasks" + argspec: "args=[\'self\', \'job_name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "num_tasks" + argspec: "args=[\'self\', \'job_name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "task_address" + argspec: "args=[\'self\', \'job_name\', \'task_index\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "task_indices" + argspec: "args=[\'self\', \'job_name\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-coordinator.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-coordinator.pbtxt new file mode 100644 index 00000000000..11277f077ee --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-coordinator.pbtxt @@ -0,0 +1,45 @@ +path: "tensorflow.train.Coordinator" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "joined" + mtype: "" + } + member_method { + name: 
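The `ClusterSpec` golden above shows a constructor taking a `cluster` argument plus accessors (`as_dict`, `job_tasks`, `num_tasks`, `task_address`, `task_indices`). A `ClusterSpec` is built from a dict mapping job names to task address lists; the helpers below emulate those accessors in plain Python (an emulation of the documented semantics, not the TF class):

```python
# Illustrative only: plain-Python emulation of the ClusterSpec
# accessors listed in the golden file. A real ClusterSpec is built as
# tf.train.ClusterSpec({"ps": [...], "worker": [...]}).

cluster = {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
}

def job_tasks(cluster, job_name):
    """Task addresses for one job, like ClusterSpec.job_tasks."""
    return list(cluster[job_name])

def num_tasks(cluster, job_name):
    return len(cluster[job_name])

def task_address(cluster, job_name, task_index):
    return cluster[job_name][task_index]

def task_indices(cluster, job_name):
    return list(range(len(cluster[job_name])))

address = task_address(cluster, "worker", 1)
```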
"__init__" + argspec: "args=[\'self\', \'clean_stop_exception_types\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "clear_stop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "join" + argspec: "args=[\'self\', \'threads\', \'stop_grace_period_secs\', \'ignore_live_threads\'], varargs=None, keywords=None, defaults=[\'None\', \'120\', \'False\'], " + } + member_method { + name: "raise_requested_exception" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "register_thread" + argspec: "args=[\'self\', \'thread\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "request_stop" + argspec: "args=[\'self\', \'ex\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "should_stop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "stop_on_exception" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "wait_for_stop" + argspec: "args=[\'self\', \'timeout\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-example.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-example.pbtxt new file mode 100644 index 00000000000..f7215a20372 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-example.pbtxt @@ -0,0 +1,80 @@ +path: "tensorflow.train.Example" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FEATURES_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: 
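The `Coordinator` golden above documents a cooperative-shutdown protocol: workers poll `should_stop()`, any thread may call `request_stop()`, and `join()` waits for registered threads. A stdlib sketch of that contract (mirroring the documented behavior, not TF's implementation, which also captures and re-raises worker exceptions) looks like:

```python
# Illustrative only: a stdlib sketch of the tf.train.Coordinator
# protocol from the golden file above. Workers poll should_stop(),
# request_stop() flips a shared event, join() waits for threads.

import threading
import time

class MiniCoordinator:
    def __init__(self):
        self._stop_event = threading.Event()
        self._threads = []
    def should_stop(self):
        return self._stop_event.is_set()
    def request_stop(self):
        self._stop_event.set()
    def register_thread(self, thread):
        self._threads.append(thread)
    def join(self, threads=None):
        for t in (threads or self._threads):
            t.join()

counter = []

def worker(coord):
    while not coord.should_stop():
        counter.append(1)
        if len(counter) >= 5:
            coord.request_stop()  # a worker may trigger the stop itself
        time.sleep(0.001)

coord = MiniCoordinator()
t = threading.Thread(target=worker, args=(coord,))
coord.register_thread(t)
t.start()
coord.join()
```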
"DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-exponential-moving-average.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-exponential-moving-average.pbtxt new file mode 100644 index 00000000000..737acbe07c9 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-exponential-moving-average.pbtxt @@ -0,0 +1,25 @@ +path: "tensorflow.train.ExponentialMovingAverage" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'decay\', \'num_updates\', \'zero_debias\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'ExponentialMovingAverage\'], " + } + member_method { + name: "apply" + argspec: "args=[\'self\', \'var_list\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "average" + argspec: "args=[\'self\', \'var\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "average_name" + argspec: "args=[\'self\', \'var\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "variables_to_restore" + argspec: "args=[\'self\', \'moving_avg_variables\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git 
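The `ExponentialMovingAverage` golden above shows `__init__` taking `decay` and an optional `num_updates`. The arithmetic behind `apply()` is a shadow-variable update, and TF documents that when `num_updates` is supplied the smaller of `decay` and `(1 + num_updates) / (10 + num_updates)` is used, so early averages track the value more closely. A plain-Python sketch of that math (the zero initialization of the shadow is illustrative; TF initializes it from the variable):

```python
# Illustrative only: the arithmetic behind
# tf.train.ExponentialMovingAverage. Each apply() updates a shadow
# copy as shadow = decay * shadow + (1 - decay) * value; with
# num_updates, the effective decay is min(decay, (1+n)/(10+n)).

def effective_decay(decay, num_updates=None):
    if num_updates is None:
        return decay
    return min(decay, (1.0 + num_updates) / (10.0 + num_updates))

def ema_update(shadow, value, decay):
    return decay * shadow + (1.0 - decay) * value

shadow = 0.0  # illustrative start; TF seeds from the variable's value
for step, value in enumerate([10.0, 10.0, 10.0, 10.0]):
    d = effective_decay(0.999, num_updates=step)
    shadow = ema_update(shadow, value, d)
```

With a constant input of 10.0 the shadow converges toward 10.0, and the `num_updates` schedule keeps the first few steps from being dominated by the near-1 decay.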
a/tensorflow/tools/api/golden/tensorflow.train.-feature-list.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-feature-list.pbtxt new file mode 100644 index 00000000000..3ad98354d69 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-feature-list.pbtxt @@ -0,0 +1,80 @@ +path: "tensorflow.train.FeatureList" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FEATURE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-feature-lists.-feature-list-entry.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-feature-lists.-feature-list-entry.pbtxt new file mode 100644 index 00000000000..cd171f4ca3e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-feature-lists.-feature-list-entry.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.train.FeatureLists.FeatureListEntry" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: 
"" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "KEY_FIELD_NUMBER" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-feature-lists.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-feature-lists.pbtxt new file mode 100644 index 00000000000..3d95017d584 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-feature-lists.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.train.FeatureLists" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FEATURE_LIST_FIELD_NUMBER" + mtype: "" + } + member { + name: "FeatureListEntry" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + 
member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-feature.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-feature.pbtxt new file mode 100644 index 00000000000..9cca132bba9 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-feature.pbtxt @@ -0,0 +1,88 @@ +path: "tensorflow.train.Feature" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "BYTES_LIST_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FLOAT_LIST_FIELD_NUMBER" + mtype: "" + } + member { + name: "INT64_LIST_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: 
"ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-features.-feature-entry.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-features.-feature-entry.pbtxt new file mode 100644 index 00000000000..858aee03415 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-features.-feature-entry.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.train.Features.FeatureEntry" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "KEY_FIELD_NUMBER" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-features.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.train.-features.pbtxt new file mode 100644 index 00000000000..49cd12153bf --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-features.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.train.Features" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FEATURE_FIELD_NUMBER" + mtype: "" + } + member { + name: "FeatureEntry" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-feed-fn-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-feed-fn-hook.pbtxt new file mode 100644 index 00000000000..7bec4d032ce --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-feed-fn-hook.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.FeedFnHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'feed_fn\'], varargs=None, keywords=None, defaults=None" + } + 
member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-final-ops-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-final-ops-hook.pbtxt new file mode 100644 index 00000000000..31cf9aaeb2c --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-final-ops-hook.pbtxt @@ -0,0 +1,34 @@ +path: "tensorflow.train.FinalOpsHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "final_ops_values" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'final_ops\', \'final_ops_feed_dict\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git 
a/tensorflow/tools/api/golden/tensorflow.train.-float-list.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-float-list.pbtxt new file mode 100644 index 00000000000..e3f01334b54 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-float-list.pbtxt @@ -0,0 +1,80 @@ +path: "tensorflow.train.FloatList" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-ftrl-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-ftrl-optimizer.pbtxt new file mode 100644 index 00000000000..0252474a1d5 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-ftrl-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.FtrlOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { 
+ name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'learning_rate_power\', \'initial_accumulator_value\', \'l1_regularization_strength\', \'l2_regularization_strength\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'-0.5\', \'0.1\', \'0.0\', \'0.0\', \'False\', \'Ftrl\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-global-step-waiter-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-global-step-waiter-hook.pbtxt new file mode 100644 index 00000000000..147448618e2 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-global-step-waiter-hook.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.GlobalStepWaiterHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + 
name: "__init__" + argspec: "args=[\'self\', \'wait_until_step\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-gradient-descent-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-gradient-descent-optimizer.pbtxt new file mode 100644 index 00000000000..bdd4c525685 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-gradient-descent-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.GradientDescentOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'GradientDescent\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', 
\'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-int64-list.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-int64-list.pbtxt new file mode 100644 index 00000000000..8917dc122cf --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-int64-list.pbtxt @@ -0,0 +1,80 @@ +path: "tensorflow.train.Int64List" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + 
member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-job-def.-tasks-entry.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-job-def.-tasks-entry.pbtxt new file mode 100644 index 00000000000..2d7fcbe5456 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-job-def.-tasks-entry.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.train.JobDef.TasksEntry" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "KEY_FIELD_NUMBER" + mtype: "" + } + member { + name: "VALUE_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-job-def.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-job-def.pbtxt new file mode 100644 index 
00000000000..fc5b76341d2 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-job-def.pbtxt @@ -0,0 +1,88 @@ +path: "tensorflow.train.JobDef" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "TASKS_FIELD_NUMBER" + mtype: "" + } + member { + name: "TasksEntry" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-logging-tensor-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-logging-tensor-hook.pbtxt new file mode 100644 index 00000000000..e55c47b3567 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-logging-tensor-hook.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.LoggingTensorHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'tensors\', \'every_n_iter\', \'every_n_secs\', \'formatter\'], varargs=None, 
keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-looper-thread.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-looper-thread.pbtxt new file mode 100644 index 00000000000..c61859004e8 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-looper-thread.pbtxt @@ -0,0 +1,73 @@ +path: "tensorflow.train.LooperThread" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "daemon" + mtype: "" + } + member { + name: "ident" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'coord\', \'timer_interval_secs\', \'target\', \'args\', \'kwargs\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "getName" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "isAlive" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "isDaemon" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "is_alive" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "join" + argspec: "args=[\'self\', \'timeout\'], 
varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "loop" + argspec: "args=[\'coord\', \'timer_interval_secs\', \'target\', \'args\', \'kwargs\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "run" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "run_loop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "setDaemon" + argspec: "args=[\'self\', \'daemonic\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "setName" + argspec: "args=[\'self\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "start" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "start_loop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "stop_loop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-momentum-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-momentum-optimizer.pbtxt new file mode 100644 index 00000000000..7cf5488a15e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-momentum-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.MomentumOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'momentum\', \'use_locking\', \'name\', \'use_nesterov\'], varargs=None, keywords=None, defaults=[\'False\', \'Momentum\', \'False\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, 
keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-monitored-session.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-monitored-session.pbtxt new file mode 100644 index 00000000000..3a5cc015b4d --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-monitored-session.pbtxt @@ -0,0 +1,26 @@ +path: "tensorflow.train.MonitoredSession" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "graph" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'session_creator\', \'hooks\', \'stop_grace_period_secs\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'120\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "run" + argspec: "args=[\'self\', \'fetches\', \'feed_dict\', \'options\', \'run_metadata\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " 
+ } + member_method { + name: "should_stop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-nan-loss-during-training-error.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-nan-loss-during-training-error.pbtxt new file mode 100644 index 00000000000..25fd5e75a79 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-nan-loss-during-training-error.pbtxt @@ -0,0 +1,16 @@ +path: "tensorflow.train.NanLossDuringTrainingError" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "args" + mtype: "" + } + member { + name: "message" + mtype: "" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-nan-tensor-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-nan-tensor-hook.pbtxt new file mode 100644 index 00000000000..7d1c89f9b37 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-nan-tensor-hook.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.NanTensorHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'loss_tensor\', \'fail_on_nan_loss\'], varargs=None, keywords=None, defaults=[\'True\'], " + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git 
a/tensorflow/tools/api/golden/tensorflow.train.-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-optimizer.pbtxt new file mode 100644 index 00000000000..20b0c4d1b56 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-optimizer.pbtxt @@ -0,0 +1,45 @@ +path: "tensorflow.train.Optimizer" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-proximal-adagrad-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-proximal-adagrad-optimizer.pbtxt new file mode 
100644 index 00000000000..571d846b6c5 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-proximal-adagrad-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.ProximalAdagradOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'initial_accumulator_value\', \'l1_regularization_strength\', \'l2_regularization_strength\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'0.1\', \'0.0\', \'0.0\', \'False\', \'ProximalAdagrad\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-proximal-gradient-descent-optimizer.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.train.-proximal-gradient-descent-optimizer.pbtxt new file mode 100644 index 00000000000..1feb136e7f7 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-proximal-gradient-descent-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.ProximalGradientDescentOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'l1_regularization_strength\', \'l2_regularization_strength\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'0.0\', \'0.0\', \'False\', \'ProximalGradientDescent\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git 
a/tensorflow/tools/api/golden/tensorflow.train.-queue-runner.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-queue-runner.pbtxt new file mode 100644 index 00000000000..d84d0058eea --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-queue-runner.pbtxt @@ -0,0 +1,49 @@ +path: "tensorflow.train.QueueRunner" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "cancel_op" + mtype: "" + } + member { + name: "close_op" + mtype: "" + } + member { + name: "enqueue_ops" + mtype: "" + } + member { + name: "exceptions_raised" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "queue" + mtype: "" + } + member { + name: "queue_closed_exception_types" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'queue\', \'enqueue_ops\', \'close_op\', \'cancel_op\', \'queue_closed_exception_types\', \'queue_runner_def\', \'import_scope\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "create_threads" + argspec: "args=[\'self\', \'sess\', \'coord\', \'daemon\', \'start\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'False\'], " + } + member_method { + name: "from_proto" + argspec: "args=[\'queue_runner_def\', \'import_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "to_proto" + argspec: "args=[\'self\', \'export_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-r-m-s-prop-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-r-m-s-prop-optimizer.pbtxt new file mode 100644 index 00000000000..2aa4ae6d2d2 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-r-m-s-prop-optimizer.pbtxt @@ -0,0 +1,46 @@ +path: "tensorflow.train.RMSPropOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: 
"" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'learning_rate\', \'decay\', \'momentum\', \'epsilon\', \'use_locking\', \'centered\', \'name\'], varargs=None, keywords=None, defaults=[\'0.9\', \'0.0\', \'1e-10\', \'False\', \'False\', \'RMSProp\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\', \'loss\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'1\', \'None\', \'False\', \'None\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\', \'var\', \'name\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-saver-def.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-saver-def.pbtxt new file mode 100644 index 00000000000..84498a64f5b --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-saver-def.pbtxt @@ -0,0 +1,120 @@ +path: "tensorflow.train.SaverDef" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "CheckpointFormatVersion" + mtype: "" + } + member { + name: 
"DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FILENAME_TENSOR_NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "KEEP_CHECKPOINT_EVERY_N_HOURS_FIELD_NUMBER" + mtype: "" + } + member { + name: "LEGACY" + mtype: "" + } + member { + name: "MAX_TO_KEEP_FIELD_NUMBER" + mtype: "" + } + member { + name: "RESTORE_OP_NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "SAVE_TENSOR_NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "SHARDED_FIELD_NUMBER" + mtype: "" + } + member { + name: "V1" + mtype: "" + } + member { + name: "V2" + mtype: "" + } + member { + name: "VERSION_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-saver.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-saver.pbtxt new file mode 100644 index 00000000000..7494fe1cc84 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-saver.pbtxt @@ -0,0 +1,53 @@ +path: "tensorflow.train.Saver" +tf_class { + is_instance: "" + is_instance: "" + member { + 
name: "last_checkpoints" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'var_list\', \'reshape\', \'sharded\', \'max_to_keep\', \'keep_checkpoint_every_n_hours\', \'name\', \'restore_sequentially\', \'saver_def\', \'builder\', \'defer_build\', \'allow_empty\', \'write_version\', \'pad_step_number\', \'save_relative_paths\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'False\', \'5\', \'10000.0\', \'None\', \'False\', \'None\', \'None\', \'False\', \'False\', \'2\', \'False\', \'False\'], " + } + member_method { + name: "as_saver_def" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "build" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "export_meta_graph" + argspec: "args=[\'self\', \'filename\', \'collection_list\', \'as_text\', \'export_scope\', \'clear_devices\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'False\', \'None\', \'False\'], " + } + member_method { + name: "from_proto" + argspec: "args=[\'saver_def\', \'import_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "recover_last_checkpoints" + argspec: "args=[\'self\', \'checkpoint_paths\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "restore" + argspec: "args=[\'self\', \'sess\', \'save_path\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "save" + argspec: "args=[\'self\', \'sess\', \'save_path\', \'global_step\', \'latest_filename\', \'meta_graph_suffix\', \'write_meta_graph\', \'write_state\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'meta\', \'True\', \'True\'], " + } + member_method { + name: "set_last_checkpoints" + argspec: "args=[\'self\', \'last_checkpoints\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "set_last_checkpoints_with_time" + argspec: "args=[\'self\', 
\'last_checkpoints_with_time\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "to_proto" + argspec: "args=[\'self\', \'export_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-scaffold.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-scaffold.pbtxt new file mode 100644 index 00000000000..21234fe7391 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-scaffold.pbtxt @@ -0,0 +1,49 @@ +path: "tensorflow.train.Scaffold" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "init_feed_dict" + mtype: "" + } + member { + name: "init_fn" + mtype: "" + } + member { + name: "init_op" + mtype: "" + } + member { + name: "local_init_op" + mtype: "" + } + member { + name: "ready_for_local_init_op" + mtype: "" + } + member { + name: "ready_op" + mtype: "" + } + member { + name: "saver" + mtype: "" + } + member { + name: "summary_op" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'init_op\', \'init_feed_dict\', \'init_fn\', \'ready_op\', \'ready_for_local_init_op\', \'local_init_op\', \'summary_op\', \'saver\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "finalize" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_or_default" + argspec: "args=[\'arg_name\', \'collection_key\', \'default_constructor\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-second-or-step-timer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-second-or-step-timer.pbtxt new file mode 100644 index 00000000000..45528d4e87e --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-second-or-step-timer.pbtxt @@ -0,0 +1,21 @@ +path: "tensorflow.train.SecondOrStepTimer" +tf_class { + is_instance: "" + 
is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'every_secs\', \'every_steps\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "last_triggered_step" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "should_trigger_for_step" + argspec: "args=[\'self\', \'step\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "update_last_triggered_step" + argspec: "args=[\'self\', \'step\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-sequence-example.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-sequence-example.pbtxt new file mode 100644 index 00000000000..9ab95537021 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-sequence-example.pbtxt @@ -0,0 +1,84 @@ +path: "tensorflow.train.SequenceExample" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "CONTEXT_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "FEATURE_LISTS_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + 
member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-server-def.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-server-def.pbtxt new file mode 100644 index 00000000000..af0a3b73cc2 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-server-def.pbtxt @@ -0,0 +1,96 @@ +path: "tensorflow.train.ServerDef" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "CLUSTER_FIELD_NUMBER" + mtype: "" + } + member { + name: "DEFAULT_SESSION_CONFIG_FIELD_NUMBER" + mtype: "" + } + member { + name: "DESCRIPTOR" + mtype: "" + } + member { + name: "Extensions" + mtype: "" + } + member { + name: "JOB_NAME_FIELD_NUMBER" + mtype: "" + } + member { + name: "PROTOCOL_FIELD_NUMBER" + mtype: "" + } + member { + name: "TASK_INDEX_FIELD_NUMBER" + mtype: "" + } + member_method { + name: "ByteSize" + } + member_method { + name: "Clear" + } + member_method { + name: "ClearExtension" + } + member_method { + name: "ClearField" + } + member_method { + name: "CopyFrom" + } + member_method { + name: "DiscardUnknownFields" + } + member_method { + name: "FindInitializationErrors" + } + member_method { + name: "FromString" + } + member_method { + name: "HasExtension" + } + member_method { + name: "HasField" + } + member_method { + name: "IsInitialized" + } + member_method { + name: "ListFields" + } + member_method { + name: "MergeFrom" + } + member_method { + name: "MergeFromString" + } + member_method { + name: "ParseFromString" + } + member_method { + name: "RegisterExtension" + } + member_method { + name: "SerializePartialToString" + } + member_method { + name: "SerializeToString" + } + member_method { + name: "SetInParent" + } + member_method { + name: "WhichOneof" + } + member_method { + name: "__init__" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-server.pbtxt 
b/tensorflow/tools/api/golden/tensorflow.train.-server.pbtxt new file mode 100644 index 00000000000..9b8f185f5b6 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-server.pbtxt @@ -0,0 +1,29 @@ +path: "tensorflow.train.Server" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "server_def" + mtype: "" + } + member { + name: "target" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'server_or_cluster_def\', \'job_name\', \'task_index\', \'protocol\', \'config\', \'start\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'True\'], " + } + member_method { + name: "create_local_server" + argspec: "args=[\'config\', \'start\'], varargs=None, keywords=None, defaults=[\'None\', \'True\'], " + } + member_method { + name: "join" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "start" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-session-creator.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-session-creator.pbtxt new file mode 100644 index 00000000000..beb232715f7 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-session-creator.pbtxt @@ -0,0 +1,12 @@ +path: "tensorflow.train.SessionCreator" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "create_session" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-session-manager.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-session-manager.pbtxt new file mode 100644 index 00000000000..cc31bb4e4b3 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-session-manager.pbtxt @@ -0,0 +1,21 @@ +path: "tensorflow.train.SessionManager" +tf_class { + is_instance: "" + is_instance: "" + member_method { + 
name: "__init__" + argspec: "args=[\'self\', \'local_init_op\', \'ready_op\', \'ready_for_local_init_op\', \'graph\', \'recovery_wait_secs\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'30\'], " + } + member_method { + name: "prepare_session" + argspec: "args=[\'self\', \'master\', \'init_op\', \'saver\', \'checkpoint_dir\', \'checkpoint_filename_with_path\', \'wait_for_checkpoint\', \'max_wait_secs\', \'config\', \'init_feed_dict\', \'init_fn\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'False\', \'7200\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "recover_session" + argspec: "args=[\'self\', \'master\', \'saver\', \'checkpoint_dir\', \'checkpoint_filename_with_path\', \'wait_for_checkpoint\', \'max_wait_secs\', \'config\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'False\', \'7200\', \'None\'], " + } + member_method { + name: "wait_for_session" + argspec: "args=[\'self\', \'master\', \'config\', \'max_wait_secs\'], varargs=None, keywords=None, defaults=[\'None\', \'inf\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-session-run-args.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-session-run-args.pbtxt new file mode 100644 index 00000000000..442990893e3 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-session-run-args.pbtxt @@ -0,0 +1,27 @@ +path: "tensorflow.train.SessionRunArgs" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "feed_dict" + mtype: "" + } + member { + name: "fetches" + mtype: "" + } + member { + name: "options" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-session-run-context.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-session-run-context.pbtxt new file mode 100644 index 
00000000000..d5adb15c95f --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-session-run-context.pbtxt @@ -0,0 +1,25 @@ +path: "tensorflow.train.SessionRunContext" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "original_args" + mtype: "" + } + member { + name: "session" + mtype: "" + } + member { + name: "stop_requested" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'original_args\', \'session\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "request_stop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-session-run-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-session-run-hook.pbtxt new file mode 100644 index 00000000000..db1aa24acf0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-session-run-hook.pbtxt @@ -0,0 +1,28 @@ +path: "tensorflow.train.SessionRunHook" +tf_class { + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-session-run-values.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-session-run-values.pbtxt new file mode 100644 index 00000000000..0b401d59c40 --- /dev/null +++ 
b/tensorflow/tools/api/golden/tensorflow.train.-session-run-values.pbtxt @@ -0,0 +1,27 @@ +path: "tensorflow.train.SessionRunValues" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "options" + mtype: "" + } + member { + name: "results" + mtype: "" + } + member { + name: "run_metadata" + mtype: "" + } + member_method { + name: "__init__" + } + member_method { + name: "count" + } + member_method { + name: "index" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-singular-monitored-session.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-singular-monitored-session.pbtxt new file mode 100644 index 00000000000..62bfdab40bb --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-singular-monitored-session.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.SingularMonitoredSession" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "graph" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'hooks\', \'scaffold\', \'master\', \'config\', \'checkpoint_dir\', \'stop_grace_period_secs\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'\', \'None\', \'None\', \'120\'], " + } + member_method { + name: "close" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "raw_session" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "run" + argspec: "args=[\'self\', \'fetches\', \'feed_dict\', \'options\', \'run_metadata\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\'], " + } + member_method { + name: "should_stop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-step-counter-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-step-counter-hook.pbtxt new file mode 100644 index 00000000000..13261f6dde1 --- /dev/null +++ 
b/tensorflow/tools/api/golden/tensorflow.train.-step-counter-hook.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.StepCounterHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'every_n_steps\', \'every_n_secs\', \'output_dir\', \'summary_writer\'], varargs=None, keywords=None, defaults=[\'100\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-stop-at-step-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-stop-at-step-hook.pbtxt new file mode 100644 index 00000000000..e388599b0bf --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-stop-at-step-hook.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.StopAtStepHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'num_steps\', \'last_step\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: 
"before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-summary-saver-hook.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-summary-saver-hook.pbtxt new file mode 100644 index 00000000000..697c3667b09 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-summary-saver-hook.pbtxt @@ -0,0 +1,30 @@ +path: "tensorflow.train.SummarySaverHook" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'save_steps\', \'save_secs\', \'output_dir\', \'summary_writer\', \'scaffold\', \'summary_op\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "after_create_session" + argspec: "args=[\'self\', \'session\', \'coord\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "after_run" + argspec: "args=[\'self\', \'run_context\', \'run_values\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "before_run" + argspec: "args=[\'self\', \'run_context\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "begin" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "end" + argspec: "args=[\'self\', \'session\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-supervisor.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-supervisor.pbtxt new file mode 100644 index 00000000000..cc9bd5c136b --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-supervisor.pbtxt @@ -0,0 
+1,153 @@ +path: "tensorflow.train.Supervisor" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "USE_DEFAULT" + mtype: "" + } + member { + name: "coord" + mtype: "" + } + member { + name: "global_step" + mtype: "" + } + member { + name: "init_feed_dict" + mtype: "" + } + member { + name: "init_op" + mtype: "" + } + member { + name: "is_chief" + mtype: "" + } + member { + name: "ready_for_local_init_op" + mtype: "" + } + member { + name: "ready_op" + mtype: "" + } + member { + name: "save_model_secs" + mtype: "" + } + member { + name: "save_path" + mtype: "" + } + member { + name: "save_summaries_secs" + mtype: "" + } + member { + name: "saver" + mtype: "" + } + member { + name: "session_manager" + mtype: "" + } + member { + name: "summary_op" + mtype: "" + } + member { + name: "summary_writer" + mtype: "" + } + member_method { + name: "Loop" + argspec: "args=[\'self\', \'timer_interval_secs\', \'target\', \'args\', \'kwargs\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "PrepareSession" + argspec: "args=[\'self\', \'master\', \'config\', \'wait_for_checkpoint\', \'max_wait_secs\', \'start_standard_services\'], varargs=None, keywords=None, defaults=[\'\', \'None\', \'False\', \'7200\', \'True\'], " + } + member_method { + name: "RequestStop" + argspec: "args=[\'self\', \'ex\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "ShouldStop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "StartQueueRunners" + argspec: "args=[\'self\', \'sess\', \'queue_runners\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "StartStandardServices" + argspec: "args=[\'self\', \'sess\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "Stop" + argspec: "args=[\'self\', \'threads\', \'close_summary_writer\'], varargs=None, keywords=None, defaults=[\'None\', \'True\'], " + 
} + member_method { + name: "StopOnException" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "SummaryComputed" + argspec: "args=[\'self\', \'sess\', \'summary\', \'global_step\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "WaitForStop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'graph\', \'ready_op\', \'ready_for_local_init_op\', \'is_chief\', \'init_op\', \'init_feed_dict\', \'local_init_op\', \'logdir\', \'summary_op\', \'saver\', \'global_step\', \'save_summaries_secs\', \'save_model_secs\', \'recovery_wait_secs\', \'stop_grace_secs\', \'checkpoint_basename\', \'session_manager\', \'summary_writer\', \'init_fn\'], varargs=None, keywords=None, defaults=[\'None\', \'0\', \'0\', \'True\', \'0\', \'None\', \'0\', \'None\', \'0\', \'0\', \'0\', \'120\', \'600\', \'30\', \'120\', \'model.ckpt\', \'None\', \'0\', \'None\'], " + } + member_method { + name: "loop" + argspec: "args=[\'self\', \'timer_interval_secs\', \'target\', \'args\', \'kwargs\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "managed_session" + argspec: "args=[], varargs=args, keywords=kwds, defaults=None" + } + member_method { + name: "prepare_or_wait_for_session" + argspec: "args=[\'self\', \'master\', \'config\', \'wait_for_checkpoint\', \'max_wait_secs\', \'start_standard_services\'], varargs=None, keywords=None, defaults=[\'\', \'None\', \'False\', \'7200\', \'True\'], " + } + member_method { + name: "request_stop" + argspec: "args=[\'self\', \'ex\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "should_stop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "start_queue_runners" + argspec: "args=[\'self\', \'sess\', \'queue_runners\'], varargs=None, keywords=None, 
defaults=[\'None\'], " + } + member_method { + name: "start_standard_services" + argspec: "args=[\'self\', \'sess\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "stop" + argspec: "args=[\'self\', \'threads\', \'close_summary_writer\'], varargs=None, keywords=None, defaults=[\'None\', \'True\'], " + } + member_method { + name: "stop_on_exception" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "summary_computed" + argspec: "args=[\'self\', \'sess\', \'summary\', \'global_step\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "wait_for_stop" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-sync-replicas-optimizer.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-sync-replicas-optimizer.pbtxt new file mode 100644 index 00000000000..915d8501af0 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-sync-replicas-optimizer.pbtxt @@ -0,0 +1,58 @@ +path: "tensorflow.train.SyncReplicasOptimizer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member { + name: "GATE_GRAPH" + mtype: "" + } + member { + name: "GATE_NONE" + mtype: "" + } + member { + name: "GATE_OP" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'opt\', \'replicas_to_aggregate\', \'total_num_replicas\', \'variable_averages\', \'variables_to_average\', \'use_locking\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'False\', \'sync_replicas\'], " + } + member_method { + name: "apply_gradients" + argspec: "args=[\'self\', \'grads_and_vars\', \'global_step\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "compute_gradients" + argspec: "args=[\'self\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: 
"get_chief_queue_runner" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_init_tokens_op" + argspec: "args=[\'self\', \'num_tokens\'], varargs=None, keywords=None, defaults=[\'-1\'], " + } + member_method { + name: "get_name" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_slot" + argspec: "args=[\'self\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "get_slot_names" + argspec: "args=[\'self\'], varargs=args, keywords=kwargs, defaults=None" + } + member_method { + name: "make_session_run_hook" + argspec: "args=[\'self\', \'is_chief\', \'num_tokens\'], varargs=None, keywords=None, defaults=[\'-1\'], " + } + member_method { + name: "minimize" + argspec: "args=[\'self\', \'loss\', \'global_step\', \'var_list\', \'gate_gradients\', \'aggregation_method\', \'colocate_gradients_with_ops\', \'name\', \'grad_loss\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'1\', \'None\', \'False\', \'None\', \'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.-worker-session-creator.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.-worker-session-creator.pbtxt new file mode 100644 index 00000000000..140407651a9 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.-worker-session-creator.pbtxt @@ -0,0 +1,14 @@ +path: "tensorflow.train.WorkerSessionCreator" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'scaffold\', \'master\', \'config\'], varargs=None, keywords=None, defaults=[\'None\', \'\', \'None\'], " + } + member_method { + name: "create_session" + argspec: "args=[\'self\'], varargs=None, keywords=None, defaults=None" + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.pbtxt new file mode 100644 index 00000000000..79fd49d0adf 
--- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.pbtxt @@ -0,0 +1,395 @@ +path: "tensorflow.train" +tf_module { + member { + name: "AdadeltaOptimizer" + mtype: "" + } + member { + name: "AdagradDAOptimizer" + mtype: "" + } + member { + name: "AdagradOptimizer" + mtype: "" + } + member { + name: "AdamOptimizer" + mtype: "" + } + member { + name: "BytesList" + mtype: "" + } + member { + name: "CheckpointSaverHook" + mtype: "" + } + member { + name: "CheckpointSaverListener" + mtype: "" + } + member { + name: "ChiefSessionCreator" + mtype: "" + } + member { + name: "ClusterDef" + mtype: "" + } + member { + name: "ClusterSpec" + mtype: "" + } + member { + name: "Coordinator" + mtype: "" + } + member { + name: "Example" + mtype: "" + } + member { + name: "ExponentialMovingAverage" + mtype: "" + } + member { + name: "Feature" + mtype: "" + } + member { + name: "FeatureList" + mtype: "" + } + member { + name: "FeatureLists" + mtype: "" + } + member { + name: "Features" + mtype: "" + } + member { + name: "FeedFnHook" + mtype: "" + } + member { + name: "FinalOpsHook" + mtype: "" + } + member { + name: "FloatList" + mtype: "" + } + member { + name: "FtrlOptimizer" + mtype: "" + } + member { + name: "GlobalStepWaiterHook" + mtype: "" + } + member { + name: "GradientDescentOptimizer" + mtype: "" + } + member { + name: "Int64List" + mtype: "" + } + member { + name: "JobDef" + mtype: "" + } + member { + name: "LoggingTensorHook" + mtype: "" + } + member { + name: "LooperThread" + mtype: "" + } + member { + name: "MomentumOptimizer" + mtype: "" + } + member { + name: "MonitoredSession" + mtype: "" + } + member { + name: "NanLossDuringTrainingError" + mtype: "" + } + member { + name: "NanTensorHook" + mtype: "" + } + member { + name: "Optimizer" + mtype: "" + } + member { + name: "ProximalAdagradOptimizer" + mtype: "" + } + member { + name: "ProximalGradientDescentOptimizer" + mtype: "" + } + member { + name: "QueueRunner" + mtype: "" + } + member { + name: 
"RMSPropOptimizer" + mtype: "" + } + member { + name: "Saver" + mtype: "" + } + member { + name: "SaverDef" + mtype: "" + } + member { + name: "Scaffold" + mtype: "" + } + member { + name: "SecondOrStepTimer" + mtype: "" + } + member { + name: "SequenceExample" + mtype: "" + } + member { + name: "Server" + mtype: "" + } + member { + name: "ServerDef" + mtype: "" + } + member { + name: "SessionCreator" + mtype: "" + } + member { + name: "SessionManager" + mtype: "" + } + member { + name: "SessionRunArgs" + mtype: "" + } + member { + name: "SessionRunContext" + mtype: "" + } + member { + name: "SessionRunHook" + mtype: "" + } + member { + name: "SessionRunValues" + mtype: "" + } + member { + name: "SingularMonitoredSession" + mtype: "" + } + member { + name: "StepCounterHook" + mtype: "" + } + member { + name: "StopAtStepHook" + mtype: "" + } + member { + name: "SummarySaverHook" + mtype: "" + } + member { + name: "Supervisor" + mtype: "" + } + member { + name: "SyncReplicasOptimizer" + mtype: "" + } + member { + name: "WorkerSessionCreator" + mtype: "" + } + member { + name: "queue_runner" + mtype: "" + } + member_method { + name: "MonitoredTrainingSession" + argspec: "args=[\'master\', \'is_chief\', \'checkpoint_dir\', \'scaffold\', \'hooks\', \'chief_only_hooks\', \'save_checkpoint_secs\', \'save_summaries_steps\', \'save_summaries_secs\', \'config\', \'stop_grace_period_secs\', \'log_step_count_steps\'], varargs=None, keywords=None, defaults=[\'\', \'True\', \'None\', \'None\', \'None\', \'None\', \'600\', \'100\', \'None\', \'None\', \'120\', \'100\'], " + } + member_method { + name: "NewCheckpointReader" + argspec: "args=[\'filepattern\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "add_queue_runner" + argspec: "args=[\'qr\', \'collection\'], varargs=None, keywords=None, defaults=[\'queue_runners\'], " + } + member_method { + name: "assert_global_step" + argspec: "args=[\'global_step_tensor\'], varargs=None, keywords=None, 
defaults=None" + } + member_method { + name: "basic_train_loop" + argspec: "args=[\'supervisor\', \'train_step_fn\', \'args\', \'kwargs\', \'master\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'\'], " + } + member_method { + name: "batch" + argspec: "args=[\'tensors\', \'batch_size\', \'num_threads\', \'capacity\', \'enqueue_many\', \'shapes\', \'dynamic_pad\', \'allow_smaller_final_batch\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'1\', \'32\', \'False\', \'None\', \'False\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "batch_join" + argspec: "args=[\'tensors_list\', \'batch_size\', \'capacity\', \'enqueue_many\', \'shapes\', \'dynamic_pad\', \'allow_smaller_final_batch\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'32\', \'False\', \'None\', \'False\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "checkpoint_exists" + argspec: "args=[\'checkpoint_prefix\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "create_global_step" + argspec: "args=[\'graph\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "do_quantize_training_on_graphdef" + argspec: "args=[\'input_graph\', \'num_bits\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "exponential_decay" + argspec: "args=[\'learning_rate\', \'global_step\', \'decay_steps\', \'decay_rate\', \'staircase\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "export_meta_graph" + argspec: "args=[\'filename\', \'meta_info_def\', \'graph_def\', \'saver_def\', \'collection_list\', \'as_text\', \'graph\', \'export_scope\', \'clear_devices\'], varargs=None, keywords=kwargs, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'False\', \'None\', \'None\', \'False\'], " + } + member_method { + name: "generate_checkpoint_state_proto" + argspec: "args=[\'save_dir\', 
\'model_checkpoint_path\', \'all_model_checkpoint_paths\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_checkpoint_mtimes" + argspec: "args=[\'checkpoint_prefixes\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "get_checkpoint_state" + argspec: "args=[\'checkpoint_dir\', \'latest_filename\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_global_step" + argspec: "args=[\'graph\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "get_or_create_global_step" + argspec: "args=[\'graph\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "global_step" + argspec: "args=[\'sess\', \'global_step_tensor\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "import_meta_graph" + argspec: "args=[\'meta_graph_or_file\', \'clear_devices\', \'import_scope\'], varargs=None, keywords=kwargs, defaults=[\'False\', \'None\'], " + } + member_method { + name: "input_producer" + argspec: "args=[\'input_tensor\', \'element_shape\', \'num_epochs\', \'shuffle\', \'seed\', \'capacity\', \'shared_name\', \'summary_name\', \'name\', \'cancel_op\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'True\', \'None\', \'32\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "inverse_time_decay" + argspec: "args=[\'learning_rate\', \'global_step\', \'decay_steps\', \'decay_rate\', \'staircase\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "latest_checkpoint" + argspec: "args=[\'checkpoint_dir\', \'latest_filename\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "limit_epochs" + argspec: "args=[\'tensor\', \'num_epochs\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "match_filenames_once" + argspec: 
"args=[\'pattern\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "maybe_batch" + argspec: "args=[\'tensors\', \'keep_input\', \'batch_size\', \'num_threads\', \'capacity\', \'enqueue_many\', \'shapes\', \'dynamic_pad\', \'allow_smaller_final_batch\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'1\', \'32\', \'False\', \'None\', \'False\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "maybe_batch_join" + argspec: "args=[\'tensors_list\', \'keep_input\', \'batch_size\', \'capacity\', \'enqueue_many\', \'shapes\', \'dynamic_pad\', \'allow_smaller_final_batch\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'32\', \'False\', \'None\', \'False\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "maybe_shuffle_batch" + argspec: "args=[\'tensors\', \'batch_size\', \'capacity\', \'min_after_dequeue\', \'keep_input\', \'num_threads\', \'seed\', \'enqueue_many\', \'shapes\', \'allow_smaller_final_batch\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'1\', \'None\', \'False\', \'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "maybe_shuffle_batch_join" + argspec: "args=[\'tensors_list\', \'batch_size\', \'capacity\', \'min_after_dequeue\', \'keep_input\', \'seed\', \'enqueue_many\', \'shapes\', \'allow_smaller_final_batch\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "natural_exp_decay" + argspec: "args=[\'learning_rate\', \'global_step\', \'decay_steps\', \'decay_rate\', \'staircase\', \'name\'], varargs=None, keywords=None, defaults=[\'False\', \'None\'], " + } + member_method { + name: "piecewise_constant" + argspec: "args=[\'x\', \'boundaries\', \'values\', \'name\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "polynomial_decay" + argspec: 
"args=[\'learning_rate\', \'global_step\', \'decay_steps\', \'end_learning_rate\', \'power\', \'cycle\', \'name\'], varargs=None, keywords=None, defaults=[\'0.0001\', \'1.0\', \'False\', \'None\'], " + } + member_method { + name: "range_input_producer" + argspec: "args=[\'limit\', \'num_epochs\', \'shuffle\', \'seed\', \'capacity\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'True\', \'None\', \'32\', \'None\', \'None\'], " + } + member_method { + name: "replica_device_setter" + argspec: "args=[\'ps_tasks\', \'ps_device\', \'worker_device\', \'merge_devices\', \'cluster\', \'ps_ops\', \'ps_strategy\'], varargs=None, keywords=None, defaults=[\'0\', \'/job:ps\', \'/job:worker\', \'True\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "shuffle_batch" + argspec: "args=[\'tensors\', \'batch_size\', \'capacity\', \'min_after_dequeue\', \'num_threads\', \'seed\', \'enqueue_many\', \'shapes\', \'allow_smaller_final_batch\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'1\', \'None\', \'False\', \'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "shuffle_batch_join" + argspec: "args=[\'tensors_list\', \'batch_size\', \'capacity\', \'min_after_dequeue\', \'seed\', \'enqueue_many\', \'shapes\', \'allow_smaller_final_batch\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'None\', \'False\', \'None\', \'None\'], " + } + member_method { + name: "slice_input_producer" + argspec: "args=[\'tensor_list\', \'num_epochs\', \'shuffle\', \'seed\', \'capacity\', \'shared_name\', \'name\'], varargs=None, keywords=None, defaults=[\'None\', \'True\', \'None\', \'32\', \'None\', \'None\'], " + } + member_method { + name: "start_queue_runners" + argspec: "args=[\'sess\', \'coord\', \'daemon\', \'start\', \'collection\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'True\', \'True\', \'queue_runners\'], " + } + member_method { + name: 
"string_input_producer" + argspec: "args=[\'string_tensor\', \'num_epochs\', \'shuffle\', \'seed\', \'capacity\', \'shared_name\', \'name\', \'cancel_op\'], varargs=None, keywords=None, defaults=[\'None\', \'True\', \'None\', \'32\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "summary_iterator" + argspec: "args=[\'path\'], varargs=None, keywords=None, defaults=None" + } + member_method { + name: "update_checkpoint_state" + argspec: "args=[\'save_dir\', \'model_checkpoint_path\', \'all_model_checkpoint_paths\', \'latest_filename\'], varargs=None, keywords=None, defaults=[\'None\', \'None\'], " + } + member_method { + name: "write_graph" + argspec: "args=[\'graph_or_graph_def\', \'logdir\', \'name\', \'as_text\'], varargs=None, keywords=None, defaults=[\'True\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.queue_runner.-queue-runner.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.queue_runner.-queue-runner.pbtxt new file mode 100644 index 00000000000..23d402de308 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.queue_runner.-queue-runner.pbtxt @@ -0,0 +1,49 @@ +path: "tensorflow.train.queue_runner.QueueRunner" +tf_class { + is_instance: "" + is_instance: "" + member { + name: "cancel_op" + mtype: "" + } + member { + name: "close_op" + mtype: "" + } + member { + name: "enqueue_ops" + mtype: "" + } + member { + name: "exceptions_raised" + mtype: "" + } + member { + name: "name" + mtype: "" + } + member { + name: "queue" + mtype: "" + } + member { + name: "queue_closed_exception_types" + mtype: "" + } + member_method { + name: "__init__" + argspec: "args=[\'self\', \'queue\', \'enqueue_ops\', \'close_op\', \'cancel_op\', \'queue_closed_exception_types\', \'queue_runner_def\', \'import_scope\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'None\', \'None\', \'None\', \'None\', \'None\'], " + } + member_method { + name: "create_threads" + argspec: "args=[\'self\', \'sess\', \'coord\', 
\'daemon\', \'start\'], varargs=None, keywords=None, defaults=[\'None\', \'False\', \'False\'], " + } + member_method { + name: "from_proto" + argspec: "args=[\'queue_runner_def\', \'import_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } + member_method { + name: "to_proto" + argspec: "args=[\'self\', \'export_scope\'], varargs=None, keywords=None, defaults=[\'None\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.train.queue_runner.pbtxt b/tensorflow/tools/api/golden/tensorflow.train.queue_runner.pbtxt new file mode 100644 index 00000000000..6e2d0430496 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.train.queue_runner.pbtxt @@ -0,0 +1,15 @@ +path: "tensorflow.train.queue_runner" +tf_module { + member { + name: "QueueRunner" + mtype: "" + } + member_method { + name: "add_queue_runner" + argspec: "args=[\'qr\', \'collection\'], varargs=None, keywords=None, defaults=[\'queue_runners\'], " + } + member_method { + name: "start_queue_runners" + argspec: "args=[\'sess\', \'coord\', \'daemon\', \'start\', \'collection\'], varargs=None, keywords=None, defaults=[\'None\', \'None\', \'True\', \'True\', \'queue_runners\'], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.truncated_normal_initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.truncated_normal_initializer.pbtxt new file mode 100644 index 00000000000..7c48f4af076 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.truncated_normal_initializer.pbtxt @@ -0,0 +1,10 @@ +path: "tensorflow.truncated_normal_initializer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'mean\', \'stddev\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'0.0\', \'1.0\', \'None\', \"\"], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.uniform_unit_scaling_initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.uniform_unit_scaling_initializer.pbtxt new file 
mode 100644 index 00000000000..4558db619e8 --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.uniform_unit_scaling_initializer.pbtxt @@ -0,0 +1,10 @@ +path: "tensorflow.uniform_unit_scaling_initializer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'factor\', \'seed\', \'dtype\'], varargs=None, keywords=None, defaults=[\'1.0\', \'None\', \"\"], " + } +} diff --git a/tensorflow/tools/api/golden/tensorflow.zeros_initializer.pbtxt b/tensorflow/tools/api/golden/tensorflow.zeros_initializer.pbtxt new file mode 100644 index 00000000000..8313009a68c --- /dev/null +++ b/tensorflow/tools/api/golden/tensorflow.zeros_initializer.pbtxt @@ -0,0 +1,10 @@ +path: "tensorflow.zeros_initializer" +tf_class { + is_instance: "" + is_instance: "" + is_instance: "" + member_method { + name: "__init__" + argspec: "args=[\'self\', \'dtype\'], varargs=None, keywords=None, defaults=[\"\"], " + } +} diff --git a/tensorflow/tools/api/lib/BUILD b/tensorflow/tools/api/lib/BUILD new file mode 100644 index 00000000000..cdfa0e7be52 --- /dev/null +++ b/tensorflow/tools/api/lib/BUILD @@ -0,0 +1,39 @@ +# Helper libraries for TensorFlow API compatibility test. 
+ +package( + default_visibility = ["//tensorflow/tools/api:__subpackages__"], +) + +licenses(["notice"]) # Apache 2.0 + +load( + "//tensorflow/core:platform/default/build_config.bzl", + "tf_proto_library", +) + +tf_proto_library( + name = "api_objects_proto", + srcs = ["api_objects.proto"], +) + +py_library( + name = "python_object_to_proto_visitor", + srcs = ["python_object_to_proto_visitor.py"], + srcs_version = "PY2AND3", + deps = [ + ":api_objects_proto_py", + "//tensorflow/tools/common:traverse", + ], +) + +filegroup( + name = "all_files", + srcs = glob( + ["**/*"], + exclude = [ + "**/METADATA", + "**/OWNERS", + ], + ), + visibility = ["//tensorflow:__subpackages__"], +) diff --git a/tensorflow/tools/api/lib/api_objects.proto b/tensorflow/tools/api/lib/api_objects.proto new file mode 100644 index 00000000000..0966a5f1d53 --- /dev/null +++ b/tensorflow/tools/api/lib/api_objects.proto @@ -0,0 +1,31 @@ +syntax = "proto2"; + +package third_party.tensorflow.tools.api; + +message TFAPIMember { + optional string name = 1; + optional string mtype = 2; +}; + +message TFAPIMethod { + optional string name = 1; + optional string path = 2; + optional string argspec = 3; +}; + +message TFAPIModule { + repeated TFAPIMember member = 1; + repeated TFAPIMethod member_method = 2; +}; + +message TFAPIClass { + repeated string is_instance = 1; + repeated TFAPIMember member = 2; + repeated TFAPIMethod member_method = 3; +}; + +message TFAPIObject { + optional string path = 1; + optional TFAPIModule tf_module = 2; + optional TFAPIClass tf_class = 3; +}; diff --git a/tensorflow/tools/api/lib/python_object_to_proto_visitor.py b/tensorflow/tools/api/lib/python_object_to_proto_visitor.py new file mode 100644 index 00000000000..64092a6441b --- /dev/null +++ b/tensorflow/tools/api/lib/python_object_to_proto_visitor.py @@ -0,0 +1,168 @@ +# Copyright 2015 The TensorFlow Authors. All Rights Reserved. 
+# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# ============================================================================== +"""A visitor class that generates protobufs for each python object.""" + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import inspect + +from tensorflow.python.platform import tf_logging as logging +from tensorflow.tools.api.lib import api_objects_pb2 + + +# The following objects need to be handled individually. +_CORNER_CASES = { + '': {'contrib': {'name': 'contrib', + 'mtype': ''}, + 'tools': {}}, + 'test.TestCase': {}, + 'test.TestCase.failureException': {}, +} + + +def _SanitizedArgSpec(obj): + """Get an ArgSpec string that is free of addresses. + + We have callables as function arg defaults. This results in addresses in + getargspec output. This function returns a sanitized string representation + of the argspec. + + Args: + obj: A python routine for which to create the sanitized argspec. + + Returns: + string, a string representation of the argspec.
+ """ + output_string = '' + unsanitized_arg_spec = inspect.getargspec(obj) + + for clean_attr in ('args', 'varargs', 'keywords'): + output_string += '%s=%s, ' % (clean_attr, + getattr(unsanitized_arg_spec, clean_attr)) + + if unsanitized_arg_spec.defaults: + sanitized_defaults = [] + for val in unsanitized_arg_spec.defaults: + str_val = str(val) + if ' object at 0x' in str_val: + sanitized_defaults.append('%s instance>' % str_val.split(' at ')[0]) + else: + sanitized_defaults.append(str_val) + + output_string += 'defaults=%s, ' % sanitized_defaults + + else: + output_string += 'defaults=None' + + return output_string + + +def _SanitizedMRO(obj): + """Get a list of superclasses with minimal amount of non-TF classes. + + Based on many parameters like python version, OS, protobuf implementation + or changes in google core libraries the list of superclasses of a class + can change. We only return the first non-TF class to be robust to non API + affecting changes. The Method Resolution Order returned by inspect.getmro + is still maintained in the return value. + + Args: + obj: A python routine for us the create the sanitized arspec of. + + Returns: + list of strings, string representation of the class names. + """ + return_list = [] + for cls in inspect.getmro(obj): + str_repr = str(cls) + return_list.append(str_repr) + if 'tensorflow' not in str_repr: + break + + return return_list + + +class PythonObjectToProtoVisitor(object): + """A visitor that summarizes given python objects as protobufs.""" + + def __init__(self): + # A dict to store all protocol buffers. + # Keyed by "path" to the object. + self._protos = {} + + def GetProtos(self): + """Return the list of protos stored.""" + return self._protos + + def __call__(self, path, parent, children): + # The path to the object. + lib_path = 'tensorflow.%s' % path if path else 'tensorflow' + + # A small helper method to construct members(children) protos. 
+ def _AddMember(member_name, member_obj, proto): + """Add the child object to the object being constructed.""" + if member_name == '__init__' or not member_name.startswith('_'): + if inspect.isroutine(member_obj): + new_method = proto.member_method.add() + new_method.name = member_name + # If member_obj is a python builtin, there is no way to get its + # argspec, because it is implemented on the C side. It also has no + # func_code. + if getattr(member_obj, 'func_code', None): + new_method.argspec = _SanitizedArgSpec(member_obj) + else: + new_member = proto.member.add() + new_member.name = member_name + new_member.mtype = str(type(member_obj)) + + parent_corner_cases = _CORNER_CASES.get(path, {}) + + if path not in _CORNER_CASES or parent_corner_cases: + # Decide if we have a module or a class. + if inspect.ismodule(parent): + # Create a module object. + module_obj = api_objects_pb2.TFAPIModule() + for name, child in children: + if name in parent_corner_cases: + # If we have an empty entry, skip this object. + if parent_corner_cases[name]: + module_obj.member.add(**(parent_corner_cases[name])) + else: + _AddMember(name, child, module_obj) + + # Store the constructed module object. + self._protos[lib_path] = api_objects_pb2.TFAPIObject( + path=lib_path, tf_module=module_obj) + elif inspect.isclass(parent): + # Construct a class. + class_obj = api_objects_pb2.TFAPIClass() + class_obj.is_instance.extend(_SanitizedMRO(parent)) + for name, child in children: + if name in parent_corner_cases: + # If we have an empty entry, skip this object. + if parent_corner_cases[name]: + class_obj.member.add(**(parent_corner_cases[name])) + else: + _AddMember(name, child, class_obj) + + # Store the constructed class object. + self._protos[lib_path] = api_objects_pb2.TFAPIObject( + path=lib_path, tf_class=class_obj) + else: + logging.error('Illegal call to ApiProtoDump::_py_obj_to_proto. '
+ 'Object is neither a module nor a class: %s', path) diff --git a/tensorflow/tools/api/tests/API_UPDATE_WARNING.txt b/tensorflow/tools/api/tests/API_UPDATE_WARNING.txt new file mode 100644 index 00000000000..54b0cfcb3c1 --- /dev/null +++ b/tensorflow/tools/api/tests/API_UPDATE_WARNING.txt @@ -0,0 +1,7 @@ +Golden file update requested! +All test failures have been skipped, see the logs for detected diffs. +This test is now going to write new golden files. +Make sure to package the updates together with your change. + +You will need an explicit API approval. This may take longer than a normal +review. diff --git a/tensorflow/tools/api/tests/BUILD b/tensorflow/tools/api/tests/BUILD new file mode 100644 index 00000000000..bfee211dca4 --- /dev/null +++ b/tensorflow/tools/api/tests/BUILD @@ -0,0 +1,43 @@ +# TensorFlow API backwards compatibility tests. + +package( + default_visibility = ["//tensorflow/tools/api:__subpackages__"], +) + +licenses(["notice"]) # Apache 2.0 + +exports_files([ + "README.txt", + "API_UPDATE_WARNING.txt", +]) + +py_test( + name = "api_compatibility_test", + srcs = ["api_compatibility_test.py"], + data = [ + "//tensorflow/tools/api/golden:api_golden", + "//tensorflow/tools/api/tests:API_UPDATE_WARNING.txt", + "//tensorflow/tools/api/tests:README.txt", + ], + srcs_version = "PY2AND3", + deps = [ + "//tensorflow:tensorflow_py", + "//tensorflow/python:platform", + "//tensorflow/tools/api/lib:python_object_to_proto_visitor", + "//tensorflow/tools/common:public_api", + "//tensorflow/tools/common:traverse", + "@protobuf//:protobuf_python", + ], +) + +filegroup( + name = "all_files", + srcs = glob( + ["**/*"], + exclude = [ + "**/METADATA", + "**/OWNERS", + ], + ), + visibility = ["//tensorflow:__subpackages__"], +) diff --git a/tensorflow/tools/api/tests/README.txt b/tensorflow/tools/api/tests/README.txt new file mode 100644 index 00000000000..3463eeec6fe --- /dev/null +++ b/tensorflow/tools/api/tests/README.txt @@ -0,0 +1,13 @@ +TensorFlow API 
backwards compatibility test +This test ensures all changes to the public API of TensorFlow are intended. + +If this test fails, it means a change has been made to the public API. Backwards +incompatible changes are not allowed. You can run the test as follows to update +test goldens and package them with your change. + + $ bazel build tensorflow/tools/api/tests:api_compatibility_test + $ bazel-bin/tensorflow/tools/api/tests/api_compatibility_test \ + --update_goldens True + +You will need an API approval to make changes to the public TensorFlow API. This +includes additions to the API. diff --git a/tensorflow/tools/api/tests/api_compatibility_test.py b/tensorflow/tools/api/tests/api_compatibility_test.py new file mode 100644 index 00000000000..27865fdc89b --- /dev/null +++ b/tensorflow/tools/api/tests/api_compatibility_test.py @@ -0,0 +1,238 @@ +# Copyright 2015 The TensorFlow Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +# ============================================================================== +"""TensorFlow API compatibility tests. + +This test ensures all changes to the public API of TensorFlow are intended. + +If this test fails, it means a change has been made to the public API. Backwards +incompatible changes are not allowed. You can run the test with +"--update_goldens" flag set to "True" to update goldens when making changes to +the public TF python API. 
+""" + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import argparse +import os +import re +import sys +import unittest + +import tensorflow as tf + +from google.protobuf import text_format + +from tensorflow.python.lib.io import file_io +from tensorflow.python.platform import resource_loader +from tensorflow.python.platform import test +from tensorflow.python.platform import tf_logging as logging +from tensorflow.tools.api.lib import api_objects_pb2 +from tensorflow.tools.api.lib import python_object_to_proto_visitor +from tensorflow.tools.common import public_api +from tensorflow.tools.common import traverse + +# FLAGS defined at the bottom: +FLAGS = None +# DEFINE_boolean, update_goldens, default False: +_UPDATE_GOLDENS_HELP = """ + Update stored golden files if API is updated. WARNING: All API changes + have to be authorized by TensorFlow leads. +""" + +# DEFINE_boolean, verbose_diffs, default False: +_VERBOSE_DIFFS_HELP = """ + If set to true, print line by line diffs on all libraries. If set to + false, only print which libraries have differences. 
+""" + +_API_GOLDEN_FOLDER = 'tensorflow/tools/api/golden' +_TEST_README_FILE = 'tensorflow/tools/api/tests/README.txt' +_UPDATE_WARNING_FILE = 'tensorflow/tools/api/tests/API_UPDATE_WARNING.txt' + + +def _KeyToFilePath(key): + """From a given key, construct a filepath.""" + def _ReplaceCapsWithDash(matchobj): + match = matchobj.group(0) + return '-%s' % (match.lower()) + + case_insensitive_key = re.sub('([A-Z]{1})', _ReplaceCapsWithDash, key) + return os.path.join(_API_GOLDEN_FOLDER, '%s.pbtxt' % case_insensitive_key) + + +def _FileNameToKey(filename): + """From a given filename, construct a key we use for api objects.""" + def _ReplaceDashWithCaps(matchobj): + match = matchobj.group(0) + return match[1].upper() + + base_filename = os.path.basename(filename) + base_filename_without_ext = os.path.splitext(base_filename)[0] + api_object_key = re.sub( + '((-[a-z]){1})', _ReplaceDashWithCaps, base_filename_without_ext) + return api_object_key + + +class ApiCompatibilityTest(test.TestCase): + + def __init__(self, *args, **kwargs): + super(ApiCompatibilityTest, self).__init__(*args, **kwargs) + + golden_update_warning_filename = os.path.join( + resource_loader.get_root_dir_with_all_resources(), + _UPDATE_WARNING_FILE) + self._update_golden_warning = file_io.read_file_to_string( + golden_update_warning_filename) + + test_readme_filename = os.path.join( + resource_loader.get_root_dir_with_all_resources(), + _TEST_README_FILE) + self._test_readme_message = file_io.read_file_to_string( + test_readme_filename) + + def _AssertProtoDictEquals(self, + expected_dict, + actual_dict, + verbose=False, + update_goldens=False): + """Diff given dicts of protobufs and report differences a readable way. + + Args: + expected_dict: a dict of TFAPIObject protos constructed from golden + files. + actual_dict: a ict of TFAPIObject protos constructed by reading from the + TF package linked to the test. + verbose: Whether to log the full diffs, or simply report which files were + different. 
+ update_goldens: Whether to update goldens when there are diffs found. + """ + diffs = [] + verbose_diffs = [] + + expected_keys = set(expected_dict.keys()) + actual_keys = set(actual_dict.keys()) + only_in_expected = expected_keys - actual_keys + only_in_actual = actual_keys - expected_keys + all_keys = expected_keys | actual_keys + + # This will be populated below. + updated_keys = [] + + for key in all_keys: + diff_message = '' + verbose_diff_message = '' + # First check if the key is not found in one or the other. + if key in only_in_expected: + diff_message = 'Object %s expected but not found (removed).' % key + verbose_diff_message = diff_message + elif key in only_in_actual: + diff_message = 'New object %s found (added).' % key + verbose_diff_message = diff_message + else: + # Now we can run an actual proto diff. + try: + self.assertProtoEquals(expected_dict[key], actual_dict[key]) + except AssertionError as e: + updated_keys.append(key) + diff_message = 'Change detected in python object: %s.' % key + verbose_diff_message = str(e) + + # All difference cases covered above. If any difference found, add to the + # list. + if diff_message: + diffs.append(diff_message) + verbose_diffs.append(verbose_diff_message) + + # If diffs are found, handle them based on flags. + if diffs: + diff_count = len(diffs) + logging.error(self._test_readme_message) + logging.error('%d differences found between API and golden.', diff_count) + messages = verbose_diffs if verbose else diffs + for i in range(diff_count): + logging.error('Issue %d\t: %s', i + 1, messages[i]) + + if update_goldens: + # Write files if requested. + logging.warning(self._update_golden_warning) + + # If the keys are only in expected, some objects are deleted. + # Remove files. + for key in only_in_expected: + filepath = _KeyToFilePath(key) + file_io.delete_file(filepath) + + # If the files are only in actual (current library), these are new + # modules. Write them to files. Also record all updates in files. 
+ for key in only_in_actual | set(updated_keys): + filepath = _KeyToFilePath(key) + file_io.write_string_to_file( + filepath, text_format.MessageToString(actual_dict[key])) + else: + # Fail if we cannot fix the test by updating goldens. + self.fail('%d differences found between API and golden.' % diff_count) + + else: + logging.info('No differences found between API and golden.') + + @unittest.skipUnless( + sys.version_info.major == 2 and os.uname()[0] == 'Linux', + 'API compatibility test goldens are generated using python2 on Linux.') + def testAPIBackwardsCompatibility(self): + # Extract all API objects. + visitor = python_object_to_proto_visitor.PythonObjectToProtoVisitor() + traverse.traverse(tf, public_api.PublicAPIVisitor(visitor)) + proto_dict = visitor.GetProtos() + + # Read all golden files. + expression = os.path.join( + resource_loader.get_root_dir_with_all_resources(), + _KeyToFilePath('*')) + golden_file_list = file_io.get_matching_files(expression) + + def _ReadFileToProto(filename): + """Read a filename, create a protobuf from its contents.""" + ret_val = api_objects_pb2.TFAPIObject() + text_format.Merge(file_io.read_file_to_string(filename), ret_val) + return ret_val + + golden_proto_dict = { + _FileNameToKey(filename): _ReadFileToProto(filename) + for filename in golden_file_list + } + + # Diff them. If the test is run to update goldens, only report diffs but + # do not fail. + self._AssertProtoDictEquals( + golden_proto_dict, + proto_dict, + verbose=FLAGS.verbose_diffs, + update_goldens=FLAGS.update_goldens) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + parser.add_argument( + '--update_goldens', type=bool, default=False, help=_UPDATE_GOLDENS_HELP) + parser.add_argument( + '--verbose_diffs', type=bool, default=False, help=_VERBOSE_DIFFS_HELP) + FLAGS, unparsed = parser.parse_known_args() + + # Now update argv, so that unittest library does not get confused.
+  sys.argv = [sys.argv[0]] + unparsed
+  test.main()
diff --git a/tensorflow/tools/ci_build/pylintrc b/tensorflow/tools/ci_build/pylintrc
index 0779ed91bc3..e71017e621c 100644
--- a/tensorflow/tools/ci_build/pylintrc
+++ b/tensorflow/tools/ci_build/pylintrc
@@ -38,7 +38,7 @@ enable=indexing-exception,old-raise-syntax
 # --enable=similarities". If you want to run only the classes checker, but have
 # no Warning level messages displayed, use "--disable=all --enable=classes
 # --disable=W"
-disable=design,similarities,no-self-use,attribute-defined-outside-init,locally-disabled,star-args,pointless-except,bad-option-value,global-statement,fixme,suppressed-message,useless-suppression,locally-enabled,no-member,no-name-in-module,import-error,unsubscriptable-object,unbalanced-tuple-unpacking,undefined-variable
+disable=design,similarities,no-self-use,attribute-defined-outside-init,locally-disabled,star-args,pointless-except,bad-option-value,global-statement,fixme,suppressed-message,useless-suppression,locally-enabled,no-member,no-name-in-module,import-error,unsubscriptable-object,unbalanced-tuple-unpacking,undefined-variable,not-context-manager
 
 # Set the cache size for astng objects.
@@ -322,4 +322,4 @@ indent-after-paren=4
 [GOOGLE LINES]
 
 # Regexp for a proper copyright notice.
-copyright=Copyright \d{4} The TensorFlow Authors\. +All [Rr]ights [Rr]eserved\.
\ No newline at end of file
+copyright=Copyright \d{4} The TensorFlow Authors\. +All [Rr]ights [Rr]eserved\.
diff --git a/tensorflow/tools/common/public_api.py b/tensorflow/tools/common/public_api.py
index 3364ff6bc9a..173b39c538a 100644
--- a/tensorflow/tools/common/public_api.py
+++ b/tensorflow/tools/common/public_api.py
@@ -48,7 +48,8 @@ class PublicAPIVisitor(object):
       'pywrap_tensorflow',  # TODO(drpng): This can be removed once sealed.
       'user_ops',  # TODO(drpng): This can be removed once sealed.
       'python',
-      'tools'
+      'tools',
+      'tensorboard',
   ],
 
   # Some implementations have this internal module that we shouldn't expose.
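The golden-file comparison in `_AssertProtoDictEquals` above first partitions keys with set arithmetic (removed, added, and possibly changed) before running a proto diff on each shared key. A minimal standalone sketch of that partitioning, using plain values instead of protos (`diff_keys` is a hypothetical helper for illustration, not part of the patch):

```python
def diff_keys(expected, actual):
    """Partition dict keys into (removed, added, changed) sets.

    Mirrors the only_in_expected / only_in_actual / updated_keys sets
    above, but compares values directly instead of diffing protos.
    """
    expected_keys = set(expected)
    actual_keys = set(actual)
    removed = expected_keys - actual_keys          # goldens to delete
    added = actual_keys - expected_keys            # goldens to create
    changed = {key for key in expected_keys & actual_keys
               if expected[key] != actual[key]}    # goldens to rewrite
    return removed, added, changed


golden = {'tf.foo': 'v1', 'tf.bar': 'v1'}
current = {'tf.bar': 'v2', 'tf.baz': 'v1'}
removed, added, changed = diff_keys(golden, current)
# removed == {'tf.foo'}, added == {'tf.baz'}, changed == {'tf.bar'}
```

In update-goldens mode the test then deletes files for `removed` and writes files for `added | changed`, which is exactly the union taken in the write loop above.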
diff --git a/tensorflow/tools/docs/BUILD b/tensorflow/tools/docs/BUILD
index b425e93aa10..8e27b133c2f 100644
--- a/tensorflow/tools/docs/BUILD
+++ b/tensorflow/tools/docs/BUILD
@@ -99,6 +99,20 @@ py_binary(
     ],
 )
 
+py_test(
+    name = "build_docs_test",
+    size = "small",
+    srcs = ["build_docs_test.py"],
+    data = ["//tensorflow:docs_src"],
+    srcs_version = "PY2AND3",
+    tags = ["manual"],
+    deps = [
+        ":generate_lib",
+        "//tensorflow:tensorflow_py",
+        "//tensorflow/python/debug:debug_py",
+    ],
+)
+
 py_binary(
     name = "generate_1_0",
     srcs = ["generate_1_0.py"],
diff --git a/tensorflow/tools/docs/build_docs_test.py b/tensorflow/tools/docs/build_docs_test.py
new file mode 100644
index 00000000000..08c27729a78
--- /dev/null
+++ b/tensorflow/tools/docs/build_docs_test.py
@@ -0,0 +1,52 @@
+# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ==============================================================================
+"""Run the python doc generator and fail if there are any broken links."""
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+
+import tensorflow as tf
+from tensorflow.python import debug as tf_debug
+from tensorflow.python.platform import googletest
+from tensorflow.python.platform import resource_loader
+from tensorflow.tools.docs import generate_lib
+
+
+class Flags(object):
+  resource_root = resource_loader.get_root_dir_with_all_resources()
+  src_dir = os.path.join(resource_root, 'third_party/tensorflow/docs_src')
+  base_dir = os.path.join(resource_root, 'third_party/tensorflow/')
+  output_dir = googletest.GetTempDir()
+
+
+class BuildDocsTest(googletest.TestCase):
+
+  def testBuildDocs(self):
+    doc_generator = generate_lib.DocGenerator()
+
+    doc_generator.set_py_modules([('tf', tf), ('tfdbg', tf_debug)])
+    doc_generator.load_contrib()
+
+    status = doc_generator.build(Flags())
+
+    if status:
+      self.fail('Found %s Errors!' % status)
+
+
+if __name__ == '__main__':
+  googletest.main()
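The `Flags` class in `build_docs_test.py` above is just an attribute bag standing in for parsed command-line flags. Assuming the generator only reads attributes off whatever object it is handed, the same pattern can be sketched with the standard library's `types.SimpleNamespace` (the paths below are placeholder values, not the real resource-loader paths):

```python
import types

# Hypothetical stand-in for parsed flags; the attribute names match what
# the doc-generator test supplies (src_dir, base_dir, output_dir).
flags = types.SimpleNamespace(
    src_dir='/tmp/tensorflow/docs_src',
    base_dir='/tmp/tensorflow',
    output_dir='/tmp/docs_out')

# Any code written against flags.src_dir / flags.output_dir works unchanged,
# whether flags came from argparse, a class, or a namespace like this one.
```

This makes such tests easy to run with alternative directories without touching a global flag parser.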