An Open Source Machine Learning Framework for Everyone

Documentation

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.

TensorFlow provides stable Python and C++ APIs, as well as APIs for other languages that are not guaranteed to be backward compatible.

Keep up-to-date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, enabling GPU support, using a Docker container, and building from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.
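After installing, a quick sanity check confirms that the package imports and reports whether it was built with GPU support (a sketch; `tf.config.list_physical_devices` is available in 2.x releases):

```python
# Post-install sanity check: print the version and GPU availability.
import tensorflow as tf

print(tf.__version__)                          # installed release
print(tf.test.is_built_with_cuda())            # True for the GPU-enabled package
print(tf.config.list_physical_devices("GPU"))  # detected GPU devices, [] if none
```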

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'

For more examples, see the TensorFlow tutorials.
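As a slightly larger illustrative sketch (not taken from the tutorials themselves), a minimal Keras model can be built and run eagerly:

```python
import tensorflow as tf

# A tiny dense network; weights are randomly initialized.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Forward pass on a batch of two dummy 4-feature examples.
out = model(tf.zeros([2, 4]))
print(out.shape)  # (2, 1)
```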

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs; please see TensorFlow Discuss for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development:

Fuzzing Status CII Best Practices Contributor Covenant

Continuous build status

Official Builds

Build Type Status Artifacts
Linux CPU Status PyPI
Linux GPU Status PyPI
Linux XLA Status TBA
macOS Status PyPI
Windows CPU Status PyPI
Windows GPU Status PyPI
Android Status Download
Raspberry Pi 0 and 1 Status Py3
Raspberry Pi 2 and 3 Status Py3
Libtensorflow macOS CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Linux CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Linux GPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Windows CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Windows GPU Status Temporarily Unavailable Nightly Binary Official GCS

Community Supported Builds

Build Type Status Artifacts
Linux AMD ROCm GPU Nightly Build Status Nightly
Linux AMD ROCm GPU Stable Release Build Status Release 1.15 / 2.x
Linux s390x Nightly Build Status Nightly
Linux s390x CPU Stable Release Build Status Release
Linux ppc64le CPU Nightly Build Status Nightly
Linux ppc64le CPU Stable Release Build Status Release 1.15 / 2.x
Linux ppc64le GPU Nightly Build Status Nightly
Linux ppc64le GPU Stable Release Build Status Release 1.15 / 2.x
Linux aarch64 CPU Nightly (Linaro) Build Status Nightly
Linux aarch64 CPU Stable Release (Linaro) Build Status Release 1.x & 2.x
Linux aarch64 CPU Nightly (OpenLab) Python 3.6 Build Status Nightly
Linux aarch64 CPU Stable Release (OpenLab) Build Status Release 1.15 / 2.x
Linux CPU with Intel oneAPI Deep Neural Network Library (oneDNN) Nightly Build Status Nightly
Linux CPU with Intel oneAPI Deep Neural Network Library (oneDNN) Stable Release Build Status Release 1.15 / 2.x
Red Hat® Enterprise Linux® 7.6 CPU & GPU Python 2.7, 3.6 Build Status 1.13.1 PyPI

Community Supported Containers

Container Type Status Artifacts
TensorFlow aarch64 Neoverse-N1 CPU Stable (Linaro) Debian Static Release 2.3

Resources

Learn more about the TensorFlow community and how to contribute.

License

Apache License 2.0

Comments
  • Error: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

    Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template

    System information

    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
    • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
    • TensorFlow installed from (source or binary): Source and Binary (tried both)
    • TensorFlow version: 1.12
    • Python version: 3.6
    • Installed using virtualenv? pip? conda?: conda
    • Bazel version (if compiling from source): 0.18
    • GCC/Compiler version (if compiling from source): gcc 5.4.0
    • CUDA/cuDNN version: Cudnn - 7.4 , CUDA- 9.0
    • GPU model and memory: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.8225 8GB

    Describe the problem I tried installing TensorFlow 1.12 using both pip install and building from source. However, when I try to run a Faster R-CNN model, I get the following error message: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.

    I only get this with TF 1.12 and Python 3.6; it works fine with other configurations.

    Provide the exact sequence of commands / steps that you executed before running into the problem

    Any other info / logs Traceback (most recent call last): File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1334, in _do_call return fn(*args) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1319, in _run_fn options, feed_dict, fetch_list, target_list, run_metadata) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun run_metadata) tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[{{node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_4__cf__7)]] [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_21/Gather/GatherV2_2/_211}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7500_...GatherV2_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/pool.py", line 103, in worker initializer(*initargs) File "detection_app.py", line 67, in worker output_q.put(y.get_stats_and_detection(frame)) File "/home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py", line 142, in get_stats_and_detection boxes, scores, classes, num = self.processFrame(img) File "/home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py", line 76, in processFrame feed_dict={self.image_tensor: image_np_expanded}) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 929, in run run_metadata_ptr) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1152, in _run feed_dict_tensor, options, run_metadata) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1328, in _do_run run_metadata) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1348, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. 
[[node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D (defined at /home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py:36) = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_4__cf__7)]] [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_21/Gather/GatherV2_2/_211}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7500_...GatherV2_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

    Caused by op 'FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D', defined at: File "detection_app.py", line 94, in pool = Pool(args.num_workers, worker, (input_q, output_q)) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/context.py", line 119, in Pool context=self.get_context()) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/pool.py", line 174, in init self._repopulate_pool() File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/pool.py", line 239, in _repopulate_pool w.start() File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 105, in start self._popen = self._Popen(self) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/context.py", line 277, in _Popen return Popen(process_obj) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/popen_fork.py", line 19, in init self._launch(process_obj) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/popen_fork.py", line 73, in _launch code = process_obj._bootstrap() File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/multiprocessing/pool.py", line 103, in worker initializer(*initargs) File "detection_app.py", line 62, in worker y = DetectorAPI() File "/home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py", line 36, in init tf.import_graph_def(od_graph_def, name='') File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 488, in new_func return func(*args, **kwargs) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def _ProcessNewOps(graph) 
File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 234, in _ProcessNewOps for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3440, in _add_new_tf_operations for c_op in c_api_util.new_tf_operations(self) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3440, in for c_op in c_api_util.new_tf_operations(self) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3299, in _create_op_from_tf_operation ret = Operation(c_op, self) File "/home/user/anaconda3/envs/tf_faust/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1770, in init self._traceback = tf_stack.extract_stack()

    UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[node FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D (defined at /home/user/faster_rcnn_inception_v2_coco_2018_01_28/base_model.py:36) = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](FeatureExtractor/MobilenetV1/MobilenetV1/Conv2d_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, FeatureExtractor/MobilenetV1/Conv2d_0/weights/read/_4__cf__7)]] [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_21/Gather/GatherV2_2/_211}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_7500_...GatherV2_2", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
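    A common mitigation for this class of cuDNN initialization failures is to let TensorFlow allocate GPU memory on demand instead of reserving it all up front; in the TF 1.x API that looks roughly like the following sketch (not a confirmed fix for this particular setup):

```python
import tensorflow.compat.v1 as tf  # in TF 1.x itself: import tensorflow as tf

# Ask TensorFlow to grow its GPU memory pool incrementally; pre-allocating
# (nearly) all device memory can make cuDNN fail to initialize on some
# driver/CUDA combinations.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

sess = tf.Session(config=config)
```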

  • Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

    Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

    Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes and No (described below)
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Manjaro
    • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
    • TensorFlow installed from (source or binary): tf-nightly-gpu (Dec 19, r1.13)
    • TensorFlow version (use command below): 1.13.0-dev20181219
    • Python version: 3.7.1
    • Bazel version (if compiling from source):
    • GCC/Compiler version (if compiling from source):
    • CUDA/cuDNN version: CUDA 10 with cuDNN 7.4.1
    • GPU model and memory: RTX 2070 8GB

    Describe the current behavior I'm running the CNN model on MNIST. When I'm running with the GPU, I am encountering 2018-12-20 20:09:13.644176: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

    I did some digging and realized that it is a memory issue (which shouldn't be the case, as I have 32GB of RAM and 64GB of swap). I ran htop when running the model and I have 20+GB free, which is more than enough to fit the 8GB vRAM mappings.

    Using the gpu_options.allow_growth = True gets the model to work properly, and setting os.environ['CUDA_VISIBLE_DEVICES'] = '-1' also works. This means that I AM facing a memory issue, but I don't see how.

    Also, using gpu_options.allow_growth = True does not fix the same issue when trying to run the tensorflow/models/official/mnist/ model, which should behave similarly to my code.
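    The gpu_options.allow_growth setting used above corresponds, in current 2.x releases, to per-device memory growth; a minimal sketch (set_memory_growth must be called before the GPUs are initialized, and the loop is a harmless no-op on machines without a GPU):

```python
import tensorflow as tf

# Request on-demand GPU memory growth for every visible device instead of
# letting TensorFlow pre-allocate (nearly) all device memory.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

print(tf.config.list_physical_devices("GPU"))
```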

    Code to reproduce the issue

    import os
    import tensorflow as tf
    from tensorflow.examples.tutorials.mnist import input_data
    import math
    import time
    # Killing optional CPU driver warnings
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    # os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
    tf.logging.set_verbosity(tf.logging.ERROR)
    
    
    class Model:
    
        def __init__(self, image, label):
            """
            A Model class contains a computational graph that classifies images
            to predictions. Each of its methods builds part of the graph
            on Model initialization. Do not modify the constructor, as doing so
            would break the autograder. You may, however, add class variables
            to use in your graph-building. e.g. learning rate, 
    
            image: the input image to the computational graph as a tensor
            label: the correct label of an image as a tensor
            prediction: the output prediction of the computational graph,
                        produced by self.forward_pass()
            optimize: the model's optimizing tensor produced by self.optimizer()
            loss: the model's loss produced by computing self.loss_function()
            accuracy: the model's prediction accuracy
            """
            self.image = image
            self.label = label
    
            # TO-DO: Add any class variables you want to use.
    
            self.prediction = self.forward_pass()
            self.loss = self.loss_function()
            self.optimize = self.optimizer()
            self.accuracy = self.accuracy_function()
    
        def forward_pass(self):
            """
            Predicts a label given an image using convolution layers
    
            :return: the prediction as a tensor
            """
            filter_1 = tf.Variable(tf.truncated_normal([3, 3, 1, 8], stddev=0.1))
            conv_1 = tf.nn.conv2d(self.image, filter_1, [1, 1, 1, 1], "SAME")
    
            reshaped = tf.reshape(conv_1, shape=[50, -1])
    
            L1 = reshaped.shape[1].value
            L2 = 500
            W1 = tf.Variable(tf.random_normal([L1, L2], mean=0, stddev=0.01))
            b1 = tf.Variable(tf.random_normal([L2], mean=0, stddev=0.01))
            relu_1 = tf.nn.relu(tf.matmul(reshaped, W1) + b1)
    
            W2 = tf.Variable(tf.random_normal([L2, 10], mean=0, stddev=0.01))
            b2 = tf.Variable(tf.random_normal([10], mean=0, stddev=0.01))
            logits = tf.nn.relu(tf.matmul(relu_1, W2) + b2)
            return logits
    
        def loss_function(self):
            """
            Calculates the model cross-entropy loss
    
            :return: the loss of the model as a tensor
            """
            loss = tf.losses.softmax_cross_entropy(onehot_labels=self.label, logits=self.prediction)
            return loss
    
        def optimizer(self):
            """
            Optimizes the model loss using a gradient-descent optimizer
    
            :return: the optimizer as a tensor
            """
            learning_rate = 0.1
            sgd = tf.train.GradientDescentOptimizer(learning_rate)
            train = sgd.minimize(self.loss)
            return train
    
        def accuracy_function(self):
            """
            Calculates the model's prediction accuracy by comparing
            predictions to correct labels – no need to modify this
    
            :return: the accuracy of the model as a tensor
            """
            correct_prediction = tf.equal(tf.argmax(self.prediction, 1),
                                          tf.argmax(self.label, 1))
            return tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    
    
    def main():
        t_start = time.time()
    
        mnist = input_data.read_data_sets("data/mnist/", one_hot=True)
        batch_sz = 50
        batch = 2000
    
        inputs = tf.placeholder(shape=[batch_sz, 28, 28, 1], dtype=tf.float32)
        labels = tf.placeholder(shape=[batch_sz, 10], dtype=tf.float32)
    
        model = Model(inputs, labels)
    
        session_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
        sess = tf.Session(config=session_config)
    
        # sess = tf.Session()
    
        sess.run(tf.global_variables_initializer())
        for i in range(batch):
            next_image, next_label = mnist.train.next_batch(batch_sz)
            next_image = next_image.reshape((batch_sz, 28, 28, 1))
            sess.run(model.optimize, feed_dict={inputs: next_image, labels: next_label})
    
        acc, test_images, test_labels = 0, mnist.test.images, mnist.test.labels
        test_batch = math.ceil(len(test_images) / batch_sz)
        for i in range(test_batch):
            batch_images = test_images[i * batch_sz: (i + 1) * batch_sz]
            batch_images = batch_images.reshape((batch_sz, 28, 28, 1))
            batch_labels = test_labels[i * batch_sz: (i + 1) * batch_sz]
            acc += sess.run(model.accuracy, feed_dict={inputs: batch_images, labels: batch_labels})
        acc /= test_batch
        print(acc)
    
        print(time.time() - t_start, 'seconds')
    
        return
    
    
    if __name__ == '__main__':
        main()
    
  • Win10: ImportError: DLL load failed: The specified module could not be found

    System information:

    • Have I written custom code: No
    • OS Platform and Distribution: Windows 10 Pro, updated
    • Mobile device: None
    • TensorFlow installed from: pip install
    • TensorFlow version: 1.11.0
    • Python version: 3.6.6
    • Bazel version: not installed
    • CUDA/cuDNN version: CUDA 9.0, cuDNN 8.0
    • GPU model and memory: GF-GTX970 STRIX
    • Exact command to reproduce: pip install tensorflow; pip install tensorflow-gpu; python; import tensorflow as tf

    Problem

    I have had this error consistently, even after trying to downgrade to older versions of the CUDA toolkit, cuDNN, Python, tensorflow, and tensorflow-gpu. I have updated my environment variables. I have installed the Visual C++ Redistributable Update. I have read and tried to follow the solutions from other similar issues (such as #10033 and #17101), but have not succeeded in fixing the problem.

    Log

    C:\Users\user>python Python 3.6.6 (v3.6.6:4cf1f54eb7, Jun 27 2018, 03:37:03) [MSC v.1900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow as tf Traceback (most recent call last): File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\imp.py", line 343, in load_dynamic return _load(spec) ImportError: DLL load failed: The specified module could not be found.

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last): File "", line 1, in File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow_init_.py", line 22, in from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python_init_.py", line 49, in from tensorflow.python import pywrap_tensorflow File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in raise ImportError(msg) ImportError: Traceback (most recent call last): File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in from tensorflow.python.pywrap_tensorflow_internal import * File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in _pywrap_tensorflow_internal = swig_import_helper() File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description) File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\imp.py", line 243, in load_module return load_dynamic(name, filename, file) File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\imp.py", line 343, in load_dynamic return _load(spec) ImportError: DLL load failed: The specified module could not be found.
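    One way to narrow down which dependency is missing is to try loading the CUDA/cuDNN DLLs directly with ctypes. The DLL names below are assumptions for a CUDA 9.0 / cuDNN 7 install (note that TF 1.11 expects cuDNN 7, not 8):

```python
import ctypes
import sys

# Hypothetical DLL names for a CUDA 9.0 / cuDNN 7 installation.
CANDIDATE_DLLS = ["cudart64_90.dll", "cudnn64_7.dll"]

def missing_dlls(names):
    """Return the DLLs that fail to load (meaningful on Windows only)."""
    if sys.platform != "win32":
        return []  # nothing to check on other platforms
    missing = []
    for name in names:
        try:
            ctypes.WinDLL(name)
        except OSError:
            missing.append(name)
    return missing

print(missing_dlls(CANDIDATE_DLLS))  # any name printed here is not on PATH
```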

  • Windows Support and Documentation

    I was excited to see TensorFlow, but like many other users, we are on Windows; it would be nice to see this support happen. Will you accept Windows port contributions?

    In the meantime, Microsoft recently released their Deep Learning toolkit which scales on multiple machines with GPUs for both Linux and Windows. https://github.com/Microsoft/CNTK

  • Upgrade to CuDNN 7 and CUDA 9

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows Server 2012
    • TensorFlow installed from (source or binary): binary
    • TensorFlow version (use command below): 1.3.0-rc1
    • Python version: 3.5.2
    • Bazel version (if compiling from source): N/A
    • CUDA/cuDNN version: CUDA V8.0.44, CuDNN 6.0
    • GPU model and memory: Nvidia GeForce GTX 1080 Ti, 11 GB
    • Exact command to reproduce: N/A

    Describe the problem

    Please upgrade TensorFlow to support CUDA 9 and CuDNN 7. Nvidia claims this will provide a 2x performance boost on Pascal GPUs.
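    For context, recent TensorFlow releases expose the CUDA/cuDNN versions they were built against; a quick way to check (this API exists only in newer 2.x releases, not in 1.3):

```python
import tensorflow as tf

# Report the CUDA/cuDNN versions this TensorFlow binary was compiled
# against; keys are absent on CPU-only builds, hence .get().
info = tf.sysconfig.get_build_info()
print(info.get("cuda_version"), info.get("cudnn_version"))
```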

  • Windows C++ tensorflow_cc.dll has overlapping memory address between string gpu options for "allocator type" and "visible device list"

    Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
    • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: NA
    • TensorFlow installed from (source or binary): source
    • TensorFlow version (use command below): 1.12.0 branched from 5b900cfe4b3b848f577315a0dde09a729f770e95
    • Python version: NA
    • Bazel version (if compiling from source): 0.19.2
    • GCC/Compiler version (if compiling from source): MSVC 2015
    • CUDA/cuDNN version: 10.0.130, 9.2.148
    • GPU model and memory: NVIDIA GP100 16Gb

    You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with: NA

    Describe the current behavior

    I am creating a session as follows, adapted from the original code:

       std::unique_ptr<tensorflow::Session> session;
       tensorflow::SessionOptions options;
       tensorflow::ConfigProto* config = &options.config;
       float fraction = 0.8f;
       int whichGPU = 0;
       int cuda_device_count = 1;
       tensorflow::GraphDef graph_def;
       tensorflow::Status status = tensorflow::ReadBinaryProto(
           tensorflow::Env::Default(), "C:\\models\\graph.pb", &graph_def);
       auto* device_count = options.config.mutable_device_count();
       device_count->insert({ "GPU", cuda_device_count });
       device_count->insert({ "CPU", 1 });
       options.config.mutable_gpu_options()->set_per_process_gpu_memory_fraction(fraction);
       options.config.mutable_gpu_options()->set_visible_device_list(std::to_string(whichGPU));
       session.reset(tensorflow::NewSession(options));
       session->Create(graph_def);
    

    which results in

    2020-05-12 09:41:28.214176: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
    name: Quadro GP100 major: 6 minor: 0 memoryClockRate(GHz): 1.4425
    pciBusID: 0000:01:00.0
    totalMemory: 16.00GiB freeMemory: 13.28GiB
    2020-05-12 09:41:28.215329: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
    2020-05-12 09:41:28.952392: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-05-12 09:41:28.952785: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
    2020-05-12 09:41:28.953095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
    2020-05-12 09:41:28.953962: E tensorflow/core/common_runtime/gpu/gpu_process_state.cc:106] Invalid allocator type: 0
    2020-05-12 09:41:28.954425: E tensorflow/core/common_runtime/session.cc:64] Failed to create session: Internal: Failed to get memory allocator for TF GPU 0 with 6899999744 bytes of memory.
    

    Describe the expected behavior

    Session is created and runs on GPU 0 only using only 80% of available memory
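    For comparison, the same options expressed through the Python protobuf bindings, where allocator_type and visible_device_list are distinct string fields and allocator_type stays empty by default; a sketch of the expected, non-overlapping behavior:

```python
import tensorflow.compat.v1 as tf

# visible_device_list and allocator_type are separate string fields on
# GPUOptions; setting one must not leak into the other.
gpu_options = tf.GPUOptions(
    per_process_gpu_memory_fraction=0.8,
    visible_device_list="0",
)

print(repr(gpu_options.allocator_type))       # expected: '' (unset)
print(repr(gpu_options.visible_device_list))  # expected: '0'
```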

    Standalone code to reproduce the issue

    #include "tensorflow/core/protobuf/control_flow.pb.h"
    #include "tensorflow/core/protobuf/config.pb.h"
    #include <iostream>
    
    int main() {
      tensorflow::GPUOptions gpu_options;
    
      gpu_options.set_visible_device_list("0");
    
      std::cout << "allocator_type " << gpu_options.allocator_type() << std::endl; //print 0
    
    }
    

    Other info / logs

    Please see the following issues https://github.com/tensorflow/tensorflow/issues/16291 https://github.com/fo40225/tensorflow-windows-wheel/issues/39

    I have built my tensorflow.dll as follows:

    $ENV:USE_BAZEL_VERSION="0.19.2"
    $ENV:PYTHON_BIN_PATH=C:\ProgramData\Anaconda3\python.exe
    $ENV:Path += ";C:\msys64\usr\bin"
    $ENV:Path += ";C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\bin"
    $ENV:Path += ";C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2\extras\CUPTI\libx64"
    $ENV:Path += ";C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cudnn-9.2-windows10-x64-v7.5.0.56\cuda\bin"
    $ENV:BAZEL_SH = "C:\msys64\usr\bin\bash.exe"
    $ENV:CUDA_TOOLKIT_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v9.2"
    $ENV:TF_CUDA_VERSION="9.2"
    $ENV:CUDNN_INSTALL_PATH="C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cudnn-9.2-windows10-x64-v7.5.0.56\cuda"
    $ENV:TF_CUDNN_VERSION="7"
    $ENV:TF_NCCL_VERSION="1"
    $ENV:TF_CUDA_COMPUTE_CAPABILITIES="3.5,3.7,5.0,5.2,6.0,6.1"
    $ENV:TF_CUDA_CLANG="0"
    $ENV:TF_NEED_CUDA="1"
    $ENV:TF_NEED_ROCM="0"
    $ENV:TF_NEED_OPENCL_SYCL="0"

    $params = "configure.py",""
    Remove-Item -Recurse -Force "C:\Windows\system32\config\systemprofile_bazel_SYSTEM\install\75b09cf1ac98c0ffb0534079b30efcc4"
    cmd /c "ECHO Y" | & python.exe @params
    bazel.exe clean --expunge
    bazel.exe build --copt=-nvcc_options=disable-warnings --test_tag_filters=-no_oss,-gpu,-benchmark-test,-nomac,-no_mac --announce_rc --test_timeout 300,450,1200,3600 --test_size_filters=small,medium --jobs=12 //tensorflow:libtensorflow_cc.so //tensorflow:libtensorflow_framework.so

    Edits have been made to the following files:

    Within

    tensorflow/BUILD

    `"//tensorflow:windows": [],`
    

    becomes

    "//tensorflow:windows": [
                "-def:" +  # This line must be directly followed by the exported_symbols_msvc.lds file
                "$(location //tensorflow:tf_exported_symbols_msvc.lds)",
            ],
    

    and within the tf_cc_shared_object function of tensorflow/BUILD

        visibility = ["//visibility:public"],
        deps = [
            "//tensorflow:tf_exported_symbols.lds",
            "//tensorflow:tf_version_script.lds",
            "//tensorflow/c:c_api",
            "//tensorflow/c/eager:c_api",
    

    becomes

        visibility = ["//visibility:public"],
        deps = [
            "//tensorflow:tf_exported_symbols.lds",
            "//tensorflow:tf_exported_symbols_msvc.lds",
            "//tensorflow:tf_version_script.lds",
            "//tensorflow/c:c_api",
            "//tensorflow/c/eager:c_api",
    

    The contents of tf_exported_symbols_msvc.lds are

    LIBRARY tensorflow_cc
    EXPORTS
        [MSVC-mangled C++ export symbols; the mangled names were garbled in this copy and are omitted]
    

    As documented by https://github.com/tensorflow/tensorflow/issues/22047#issuecomment-421452033

    My software is linked against libprotobuf.lib from https://mirror.bazel.build/github.com/google/protobuf/archive/v3.6.0.tar.gz

    built as

    cmake -G "Visual Studio 14 2015 Win64"  .. -DCMAKE_INSTALL_PREFIX="%current%\protobuf-3.6.0" -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_SHARED_LIBS=ON -Dprotobuf_MSVC_STATIC_RUNTIME=OFF
    cmake --build . --target install --config Release -- /maxcpucount:12
    

    I also tried editing tensorflow\tf_version_script.lds to include

    *protobuf*
    

    I also tried the TF_EXPORT macro from #include "tensorflow/core/platform/macros.h"

    in tensorflow/core/public/session_options.h and tensorflow/core/common_runtime/session_options.cc

    as suggested by https://github.com/sitting-duck/stuff/tree/master/ai/tensorflow/build_tensorflow_1.14_source_for_Windows

    Do you have any suggestions about how to ensure that the GPU options for allocator type and visible device list do not share the same memory, while still keeping a monolithic DLL under Windows?

  • Quantization-Aware Training support in Keras

    Quantization-Aware Training support in Keras

    System information

    • TensorFlow version (you are using): 1.13.1 (but willing to use 2.0.0-alpha0 if there is a good reason)
    • Are you willing to contribute it (Yes/No): Yes (given some pointers on how to best go about it)

    Describe the feature and the current behavior/state. Currently there is no obvious way to apply tf.contrib.quantize.create_training_graph to a keras model. The keras API only allows access to the graph after it has already created a session. Attempting to modify the graph at this point does not work: https://stackoverflow.com/questions/55123417/quantization-aware-retraining-a-keras-model https://stackoverflow.com/questions/52259343/quantize-a-keras-neural-network-model

    I have also tried to create a new session after rewriting the graph, without success:

    tf.contrib.quantize.create_training_graph(input_graph=tf.keras.backend.get_session().graph, quant_delay=0)
    # create a new session after rewriting the graph
    new_session = tf.Session()
    tf.keras.backend.set_session(new_session)
    

    Results in this error when I try to fit the model:

    tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable dense_5/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/dense_5/bias/class tensorflow::Var does not exist.
            [[{{node dense_5/BiasAdd/ReadVariableOp}}]]
    

    Will this change the current API? How? Probably, but in a backwards-compatible way. I imagine some kind of graph-rewriting hook would be necessary in the tf.keras API.

    Who will benefit from this feature? Users of TF Lite / Edge TPU who wish to easily train quantized models using the Keras API (which is being promoted as the new "one true API" for TensorFlow).

    Any other info. Related issue on the main Keras project: https://github.com/keras-team/keras/issues/11105

  • Unable to install TensorFlow on Python3.7 with pip

    Unable to install TensorFlow on Python3.7 with pip

    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): N/A
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.13
    • TensorFlow installed from (source or binary): binary
    • TensorFlow version (use command below): 1.8
    • Python version: 3.7
    • Bazel version (if compiling from source): N/A
    • GCC/Compiler version (if compiling from source): N/A
    • CUDA/cuDNN version: N/A
    • GPU model and memory: N/A
    • Exact command to reproduce: pip install tensorflow

    Describe the problem

    Installing TensorFlow on Python3.7 with pip failed. Please see the failure log below.

    Source code / logs

    Could not find a version that satisfies the requirement tensorflow (from versions: ) No matching distribution found for tensorflow
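    The empty "from versions:" list means pip found no wheel whose tags match the running interpreter. A pure-Python sketch of that check (the supported-version set is an assumption based on the CPython versions TensorFlow 1.8 wheels were published for at the time):

```python
import sys

# Hypothetical reproduction of pip's decision: if no published wheel matches
# the current interpreter's (major, minor) version, pip reports
# "from versions: )" and refuses to install.
TF_18_SUPPORTED = {(2, 7), (3, 4), (3, 5), (3, 6)}  # assumed set

def wheel_available(version_info=sys.version_info):
    return tuple(version_info[:2]) in TF_18_SUPPORTED

print("TF 1.8 wheel available for this interpreter:", wheel_available())
```

    Until a py37 wheel is published, installing into a Python 3.6 environment (e.g. via conda or pyenv) is the usual workaround.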

  • Crash: Could not create cuDNN handle when convnets are used

    Crash: Could not create cuDNN handle when convnets are used

    Tensorflow (GPU) was imported successfully, but when running a session that involves a convolutional neural network (CNN), Python crashes with the following message:

    E tensorflow/stream_executor/cuda/cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
    F tensorflow/core/kernels/conv_ops.cc:605] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms) 
    

    The problem persists on any combination of CUDA toolkit 7.5/8.0 and Tensorflow installed from pip/source. Test sessions that do not use CNNs are run successfully.

    What related GitHub issues or StackOverflow threads have you found by searching the web for your problem?

    The issue is similar to https://github.com/tensorflow/tensorflow/issues/6586, where I first commented. But since I experience the problem on a Mac, it was suggested that I open a separate issue.

    Environment info

    Operating System: macOS Sierra 10.12.2
    Xcode version: 8.2 (8C38) (When I later tried CUDA 7.5, I installed Command Line Tools version 7.3.1 because CUDA 7.5 lacked support for the more recent compilers.)
    Python: 3.5.2 (Anaconda)

    Installed version of CUDA: tried both 8.0 (initially) and 7.5 (reported here, toolkit only; the driver is still 8.0)
    Installed version of cuDNN: 5.1 (different installations according to CUDA versions)
    (please attach the output of ls -l /path/to/cuda/lib/libcud*):

    lrwxr-xr-x  1 root   wheel        33  5 Jan 20:33 /usr/local/cuda/lib/libcuda.1.dylib -> /usr/local/cuda/lib/libcuda.dylib
    -rwxr-xr-x@ 1 root   wheel      8280 13 Apr  2016 /usr/local/cuda/lib/libcuda.dylib
    lrwxr-xr-x@ 1 root   wheel        45 13 Apr  2016 /usr/local/cuda/lib/libcudadevrt.a -> /Developer/NVIDIA/CUDA-7.5/lib/libcudadevrt.a
    lrwxr-xr-x@ 1 root   wheel        50 13 Apr  2016 /usr/local/cuda/lib/libcudart.7.5.dylib -> /Developer/NVIDIA/CUDA-7.5/lib/libcudart.7.5.dylib
    lrwxr-xr-x@ 1 root   wheel        46 13 Apr  2016 /usr/local/cuda/lib/libcudart.dylib -> /Developer/NVIDIA/CUDA-7.5/lib/libcudart.dylib
    lrwxr-xr-x@ 1 root   wheel        49 13 Apr  2016 /usr/local/cuda/lib/libcudart_static.a -> /Developer/NVIDIA/CUDA-7.5/lib/libcudart_static.a
    lrwxr-xr-x  1 root   wheel        16  5 Jan 17:14 /usr/local/cuda/lib/libcudnn.5 -> libcudnn.5.dylib
    -rwxr-xr-x@ 1 ymfa   staff  58975112 10 Jun  2016 /usr/local/cuda/lib/libcudnn.5.dylib
    lrwxr-xr-x@ 1 ymfa   staff        16 10 Jun  2016 /usr/local/cuda/lib/libcudnn.dylib -> libcudnn.5.dylib
    lrwxr-xr-x  1 root   wheel        16  5 Jan 17:14 /usr/local/cuda/lib/libcudnn5.dylib -> libcudnn.5.dylib
    -rw-r--r--@ 1 ymfa   staff  56392320 10 Jun  2016 /usr/local/cuda/lib/libcudnn_static.a
    

    I tried both installing from pip and source. I first installed from binary pip package:

    1. A link to the pip package you installed: tensorflow-gpu
    2. The output from python -c "import tensorflow; print(tensorflow.__version__)". 0.12.head

    Later I installed from source (the pip package was uninstalled):

    1. The commit hash (git rev-parse HEAD) d67c09d98a576e1fbf2f3609ddb842e53890f31c

    2. The output of bazel version

      Build label: 0.4.3-homebrew
      Build target: bazel-out/local-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
      Build time: Thu Dec 22 15:20:15 2016 (1482420015)
      Build timestamp: 1482420015
      Build timestamp as int: 1482420015

    If possible, provide a minimal reproducible example

    I made a minimal example by simplifying the network and reducing the training data to only twenty images and two classes for classification. issue.zip contains the Python code and the data. I wrote two convolutional layers because I found that the network with only one convolutional layer runs without problems.

    Complete log using CUDA 7.5 and Tensorflow compiled from source

    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcublas.7.5.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcudnn.5.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcufft.7.5.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcuda.1.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUDA library libcurand.7.5.dylib locally
    W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
    W tensorflow/core/platform/cpu_feature_guard.cc:95] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
    I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:874] OS X does not support NUMA - returning NUMA node zero
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
    name: GeForce GT 650M
    major: 3 minor: 0 memoryClockRate (GHz) 0.9
    pciBusID 0000:01:00.0
    Total memory: 1023.69MiB
    Free memory: 740.18MiB
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y 
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 650M, pci bus id: 0000:01:00.0)
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
    F tensorflow/core/kernels/conv_ops.cc:605] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms) 
    

    Complete log using CUDA 8.0 and Tensorflow installed from pip

    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcuda.1.dylib locally
    I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcurand.dylib locally
    I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] OS X does not support NUMA - returning NUMA node zero
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties: 
    name: GeForce GT 650M
    major: 3 minor: 0 memoryClockRate (GHz) 0.9
    pciBusID 0000:01:00.0
    Total memory: 1023.69MiB
    Free memory: 590.00MiB
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y 
    I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 650M, pci bus id: 0000:01:00.0)
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:385] could not create cudnn handle: CUDNN_STATUS_NOT_INITIALIZED
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:392] error retrieving driver version: Invalid argument: expected %d.%d or %d.%d.%d form for driver version; got ""
    E tensorflow/stream_executor/cuda/cuda_dnn.cc:352] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM
    F tensorflow/core/kernels/conv_ops.cc:532] Check failed: stream->parent()->GetConvolveAlgorithms(&algorithms)
    
  • ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

    ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

    I installed the tf-nightly build and I get the following error when importing tensorflow: ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory.

    If I check for CUDA 9, I get the following:

    ldconfig -v
    /usr/local/cuda-8.0/targets/x86_64-linux/lib:
    	libnvgraph.so.8.0 -> libnvgraph.so.8.0.61
    	libnppicom.so.8.0 -> libnppicom.so.8.0.61
    	libnppial.so.8.0 -> libnppial.so.8.0.61
    	libcufftw.so.8.0 -> libcufftw.so.8.0.61
    	libcufft.so.8.0 -> libcufft.so.8.0.61
    	libnppif.so.8.0 -> libnppif.so.8.0.61
    	libcublas.so.8.0 -> libcublas.so.8.0.88
    	libnvblas.so.8.0 -> libnvblas.so.8.0.88
    	libnppi.so.8.0 -> libnppi.so.8.0.61
    	libcusolver.so.8.0 -> libcusolver.so.8.0.61
    	libnppidei.so.8.0 -> libnppidei.so.8.0.61
    	libnvrtc-builtins.so.8.0 -> libnvrtc-builtins.so.8.0.61
    	libnvrtc.so.8.0 -> libnvrtc.so.8.0.61
    	libnpps.so.8.0 -> libnpps.so.8.0.61
    	libcuinj64.so.8.0 -> libcuinj64.so.8.0.61
    	libnppig.so.8.0 -> libnppig.so.8.0.61
    	libOpenCL.so.1 -> libOpenCL.so.1.0.0
    	libnppicc.so.8.0 -> libnppicc.so.8.0.61
    	libnppist.so.8.0 -> libnppist.so.8.0.61
    	libnppisu.so.8.0 -> libnppisu.so.8.0.61
    	libnppim.so.8.0 -> libnppim.so.8.0.61
    	libcurand.so.8.0 -> libcurand.so.8.0.61
    	libcudart.so.8.0 -> libcudart.so.8.0.61
    	libnvToolsExt.so.1 -> libnvToolsExt.so.1.0.0
    	libnppitc.so.8.0 -> libnppitc.so.8.0.61
    	libnppc.so.8.0 -> libnppc.so.8.0.61
    	libcusparse.so.8.0 -> libcusparse.so.8.0.61
    /usr/local/cuda-9.1/targets/x86_64-linux/lib:
    	libnppicc.so.9.1 -> libnppicc.so.9.1.85
    	libnppisu.so.9.1 -> libnppisu.so.9.1.85
    	libcufftw.so.9.1 -> libcufftw.so.9.1.85
    	libcufft.so.9.1 -> libcufft.so.9.1.85
    	libnppial.so.9.1 -> libnppial.so.9.1.85
    	libnppist.so.9.1 -> libnppist.so.9.1.85
    	libcublas.so.9.1 -> libcublas.so.9.1.85
    	libnvblas.so.9.1 -> libnvblas.so.9.1.85
    	libnppitc.so.9.1 -> libnppitc.so.9.1.85
    	libcusolver.so.9.1 -> libcusolver.so.9.1.85
    	libnvrtc.so.9.1 -> libnvrtc.so.9.1.85
    	libnvrtc-builtins.so.9.1 -> libnvrtc-builtins.so.9.1.85
    	libnppidei.so.9.1 -> libnppidei.so.9.1.85
    	libOpenCL.so.1 -> libOpenCL.so.1.0.0
    	libnppig.so.9.1 -> libnppig.so.9.1.85
    	libnppc.so.9.1 -> libnppc.so.9.1.85
    	libcudart.so.9.1 -> libcudart.so.9.1.85
    	libnvToolsExt.so.1 -> libnvToolsExt.so.1.0.0
    	libnvgraph.so.9.1 -> libnvgraph.so.9.1.85
    	libnppif.so.9.1 -> libnppif.so.9.1.85
    	libcusparse.so.9.1 -> libcusparse.so.9.1.85
    	libaccinj64.so.9.1 -> libaccinj64.so.9.1.85
    	libcuinj64.so.9.1 -> libcuinj64.so.9.1.85
    	libnppim.so.9.1 -> libnppim.so.9.1.85
    	libnppicom.so.9.1 -> libnppicom.so.9.1.85
    	libnpps.so.9.1 -> libnpps.so.9.1.85
    	libcurand.so.9.1 -> libcurand.so.9.1.85
    

    Is that due to a name mismatch (libcublas.so.9.0 != libcublas.so.9.1)? And if so, how can we overcome this?
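    Yes: the dynamic loader matches sonames exactly, so an installed 9.1 can never satisfy a request for 9.0. A minimal pure-Python sketch of that matching rule (the installed list mirrors the ldconfig output above):

```python
# The loader resolves a shared library by its exact soname; version-adjacent
# files do not count as matches, which is why 9.1 cannot stand in for 9.0.
installed = {"libcublas.so.8.0", "libcublas.so.9.1"}
requested = "libcublas.so.9.0"  # what tf-nightly was linked against

print("resolvable:", requested in installed)  # False: hence the ImportError
```

    The usual fix is to install CUDA 9.0 alongside 9.1 (multiple toolkits can coexist) and put its lib directory on LD_LIBRARY_PATH; symlinking the 9.1 library to the 9.0 name may fail later at the symbol level.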

  • [Question&Error] Is there detection model like a SSD-Mobile-net in tensorflow-lite?

    [Question&Error] Is there detection model like a SSD-Mobile-net in tensorflow-lite?

    Hi.

    I am developing an Android application using TensorFlow Lite.

    https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/lite/g3doc/models.md does not list a detection model.

    Also, I tried to convert SSD-Inception v2 using the TensorFlow Lite API, but there seems to be a problem.

    ##Command

    
    bazel run --config=opt --copt=-msse4.1 --copt=-msse4.2 \
      //tensorflow/contrib/lite/toco:toco -- \
      --input_file=/home/danshin/tensorflow_lite/lite_model/fire_incpetion_v2.pb \
      --output_file=/home/danshin/tensorflow_lite/lite_model/fire_inception_v2.lite \
      --input_format=TENSORFLOW_GRAPHDEF \
      --output_format=TFLITE \
      --inference_type=FLOAT \
      --input_shape=1,300,300,3 \
      --input_array=image_tensor \
      --output_array={detection_boxes,detection_scores,detection_classes,num_detections}
    

    ##Error code

    
    2017-12-26 14:59:25.159220: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 2029 operators, 3459 arrays (0 quantized)
    2017-12-26 14:59:25.251633: F tensorflow/contrib/lite/toco/graph_transformations/resolve_tensorflow_switch.cc:95] Check failed: other_op->type == OperatorType::kTensorFlowMerge 
    

    The fire_inception_v2 file is created, but its size is zero bytes. What is the problem?

    Also, please let me know the best way to deploy a custom model for object detection.

    Could somebody help me, please?

    Thank you.

  • Unhandled exception

    Unhandled exception

    Please go to Stack Overflow for help and support:

    https://stackoverflow.com/questions/tagged/tensorflow

    If you open a GitHub issue, here is our policy:

    1. It must be a bug, a feature request, or a significant problem with the documentation (for small docs fixes please send a PR instead).
    2. The form below must be filled out.
    3. It shouldn't be a TensorBoard issue. Those go here.

    Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.


    System information

    • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
    • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
    • TensorFlow installed from (source or binary):
    • TensorFlow version (use command below):
    • Python version:
    • Bazel version (if compiling from source):
    • GCC/Compiler version (if compiling from source):
    • CUDA/cuDNN version:
    • GPU model and memory:
    • Exact command to reproduce:

    You can collect some of this information using our environment capture script:

    https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh

    You can obtain the TensorFlow version with:

    python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"
    

    Describe the problem

    Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.

    Source code / logs

    Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.

  • SparseFillEmptyRowsGrad crash

    SparseFillEmptyRowsGrad crash

    Click to expand!

    Issue Type

    Bug

    Source

    binary

    Tensorflow Version

    2.11.0.dev20220914

    Custom Code

    No

    OS Platform and Distribution

    No response

    Mobile device

    No response

    Python version

    No response

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    When given an empty list, SparseFillEmptyRowsGrad crashes with an abort. As a public API, it should validate the input earlier and raise a Python exception instead.
    

    Standalone code to reproduce the issue

    Reproduced on CoLab with CUDA backend: https://colab.research.google.com/drive/1nUyX5iKxKRWR2m3NHOZdoeXaUh8s3P6Z?usp=sharing

    import tensorflow as tf
    print(tf.__version__)
    
    tf.raw_ops.SparseFillEmptyRowsGrad(reverse_index_map=[], grad_values=[])
    

    Relevant log output

    import tensorflow as tf
    print(tf.__version__)
    
    tf.raw_ops.SparseFillEmptyRowsGrad(reverse_index_map=[], grad_values=[])
    
    2.8.2
    2022-09-20 14:44:40.812917: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
    2022-09-20 14:44:40.847990: F ./tensorflow/core/util/gpu_launch_config.h:129] Check failed: work_element_count > 0 (0 vs. 0)
    /bin/bash: line 1:   124 Aborted                 (core dumped) python bug.py 2>&1
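    The kind of early validation the report asks for can be sketched at the Python level (a hypothetical wrapper, not the actual TensorFlow fix):

```python
def checked_sparse_fill_empty_rows_grad(reverse_index_map, grad_values):
    """Hypothetical guard: raise a normal Python exception for empty inputs
    instead of letting the CUDA kernel hit the CHECK failure and abort."""
    if len(reverse_index_map) == 0 or len(grad_values) == 0:
        raise ValueError(
            "reverse_index_map and grad_values must be non-empty; got "
            f"lengths {len(reverse_index_map)} and {len(grad_values)}")
    # A real wrapper would dispatch to
    # tf.raw_ops.SparseFillEmptyRowsGrad(...) here.
    return None
```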
    
  • tensorflow 2.9.1-gpu error kernel driver does not appear to be running on this host

    tensorflow 2.9.1-gpu error kernel driver does not appear to be running on this host

    Click to expand!

    Issue Type

    Build/Install

    Source

    binary

    Tensorflow Version

    2.9.1-gpu

    Custom Code

    No

    OS Platform and Distribution

    Ubuntu 18.04.1

    Mobile device

    n/a

    Python version

    3.8.10

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    TensorFlow 2.9.1-gpu is installed from a Dockerfile.

    Why is the kernel driver missing on the host if the image is the GPU-enabled "tensorflow/tensorflow:2.9.1-gpu"?
    

    Standalone code to reproduce the issue

    tf-docker /app > python3
    Python 3.8.10 (default, Jun 22 2022, 20:18:18) 
    [GCC 9.4.0] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    
    >>> import tensorflow as tf
    2022-09-20 14:11:26.407642: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
    
    >>> print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
    2022-09-20 14:11:47.727435: E tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: UNKNOWN ERROR (34)
    2022-09-20 14:11:47.727516: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (mv-inference-5f7d446b77-kmsxn): /proc/driver/nvidia/version does not exist
    Num GPUs Available:  0
    
    >>> print(tf.__version__)
    2.9.1
    

    Relevant log output

    No response
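    The -gpu image ships only the CUDA user-space libraries; the NVIDIA kernel driver must be installed on the host and exposed to the container (e.g. via the NVIDIA Container Toolkit and `docker run --gpus all`). The log message points at the standard probe file, which can be checked directly (a minimal sketch):

```python
import os

def nvidia_driver_visible():
    """True if the host's NVIDIA kernel driver is exposed in this
    environment; this is the same file the TensorFlow log complains about."""
    return os.path.exists("/proc/driver/nvidia/version")

print("NVIDIA kernel driver visible:", nvidia_driver_visible())
```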

  • A flag to enable the previous RNG behavior for `tf.keras.initializers`

    A flag to enable the previous RNG behavior for `tf.keras.initializers`

    Issue Type

    Feature Request

    Source

    binary

    Tensorflow Version

    TF 2.10

    Custom Code

    No

    Current Behaviour?

    According to the release notes, the RNG behavior of tf.keras.initializers changed in TF 2.10.

    In particular, tf.random.set_seed(seed) is no longer enough to reproduce the same model weights - explicit seeds must be set in the initializers.

    In general, this is NOT an issue. For testing purposes, however, it makes things much more difficult. For example, suppose we have MyCustomModel, implemented with layers that do not specify any initializer (and therefore no explicit initializer seed).

    If we want a CI test that verifies the model produces the same result, this is impossible with TF 2.10 due to the new behavior. With TF < 2.10, we could set tf.random.set_seed(seed=0) and the model weights would be the same, so we could check whether the output matches the expected values.

    It would be great if there were a way to restore the previous behavior for testing purposes.
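    The distinction the request describes can be illustrated with a plain-Python analogy (this is not the Keras API, just a sketch of why a single global seed is draw-order-dependent while explicit per-component seeds are not):

```python
import random

# Analogy only, with invented names: with one global seed, each component's
# value depends on the order in which draws happen; giving each component its
# own explicitly seeded generator is order-independent.
def init_with_global_seed(order):
    random.seed(0)
    return {name: random.random() for name in order}

def init_with_explicit_seeds(order, seeds):
    return {name: random.Random(seeds[name]).random() for name in order}

seeds = {"w1": 1, "w2": 2}
a = init_with_global_seed(["w1", "w2"])
b = init_with_global_seed(["w2", "w1"])
c = init_with_explicit_seeds(["w1", "w2"], seeds)
d = init_with_explicit_seeds(["w2", "w1"], seeds)
# a["w1"] != b["w1"]  (global seed: value depends on draw order)
# c == d              (explicit seeds: reproducible regardless of order)
```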

  • Refactoring the GEMM interface

    This PR replaces the current GEMM-related methods in the stream executor interface (dozens of them, all with 10+ arguments) with a few methods, packing their arguments into structures. This substantially shrinks the code and makes the interface easily extensible: if a new optional argument (like the recently added compute precision setting) needs to be passed through to the stream executor, it only needs to be added to the structure definition instead of modifying 50 function signatures.
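    The actual interface is C++, but the refactoring pattern the PR describes can be sketched in Python with invented names: a single config object replaces long positional argument lists, so new optional fields extend one structure instead of every call site.

```python
from dataclasses import dataclass

# Illustrative sketch only; all names are hypothetical, not the stream
# executor API. Arguments are packed into one structure so that a newly
# added option (e.g. compute precision) touches only the structure.
@dataclass
class GemmConfig:
    m: int
    n: int
    k: int
    alpha: float = 1.0
    beta: float = 0.0
    compute_precision: str = "default"  # new options land here only

def do_gemm(cfg: GemmConfig) -> str:
    # Stand-in for the real dispatch: one signature regardless of options.
    return f"gemm {cfg.m}x{cfg.k} * {cfg.k}x{cfg.n} (precision={cfg.compute_precision})"

result = do_gemm(GemmConfig(m=2, n=3, k=4, compute_precision="highest"))
print(result)
```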

  • Tensorflow failed to build with Download from archive/13c6828bedeb815ee7748f82ca36073dbd55a9db.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found

    Issue Type

    Build/Install

    Source

    source

    Tensorflow Version

    master branch commit a098414

    Custom Code

    No

    OS Platform and Distribution

    windows server 2019

    Mobile device

    No response

    Python version

    No response

    Bazel version

    No response

    GCC/Compiler version

    No response

    CUDA/cuDNN version

    No response

    GPU model and memory

    No response

    Current Behaviour?

    WARNING:c
    Loading: 0 packages loaded
    Analyzing: target //tensorflow/tools/pip_package:build_pip_package (1 packages loaded, 0 targets configured)
    WARNING: Download from https://mirror.bazel.build/github.com/bazelbuild/rules_cc/archive/081771d4a0e9d7d3aa0eed2ef389fa4700dfb23e.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/openxla/stablehlo/archive/9ca259d5092e9cf2c1fa0788a470df6a4fc95f0a.zip failed: class java.io.FileNotFoundException GET returned 404 Not Found
    Analyzing: target //tensorflow/tools/pip_package:build_pip_package (272 packages loaded, 4050 targets configured)
    Analyzing: target //tensorflow/tools/pip_package:build_pip_package (528 packages loaded, 28314 targets configured)
    ERROR: F:/gitp/tensorflow/tensorflow/tensorflow/python/BUILD:3654:8: in cmd attribute of genrule rule //tensorflow/python:pywrap_tensorflow_import_lib_file: variable '$<' : no input file
    ERROR: F:/gitp/tensorflow/tensorflow/tensorflow/python/BUILD:3654:8: Analysis of target '//tensorflow/python:pywrap_tensorflow_import_lib_file' failed
    ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: 
    INFO: Elapsed time: 533.961s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully (529 packages loaded, 29595 targets configured)
    FAILED: Build did NOT complete successfully (529 packages loaded, 29595 targets configured)
    

    Standalone code to reproduce the issue

    1. git clone https://github.com/tensorflow/tensorflow F:\gitP\tensorflow\tensorflow
    2. cd F:\gitP\tensorflow\tensorflow
    3. set VSCMD_SKIP_SENDTELEMETRY=1 & "C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\Common7\Tools\VsDevCmd.bat" -host_arch=amd64 -arch=amd64
    4. python.exe -m pip install evergreen.py 2>&1
    5. pip3 install six numpy wheel 2>&1
    6. pip3 install keras_applications==1.0.6 --no-deps 2>&1
    7. pip3 install keras_preprocessing==1.0.5 --no-deps 2>&1
    8. pip3 install requests==2.26.0 --no-deps 2>&1
    9. pip3 install packaging --no-deps 2>&1
    10. set PATH=F:\gitP\tensorflow\tensorflow\..\tools;%path%
    11. set PATH=F:\gitP\tensorflow\tensorflow\..\tools\msys64\usr\bin;%path%
    12. yes "" 2>nul | python ./configure.py 2>&1
    13. cd F:\gitP\tensorflow\tensorflow
    14. set BAZEL_VC=C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\VC
    15. set BAZEL_VC_FULL_VERSION=14.29.30133
    16. set PATH=F:\gitP\tensorflow\tensorflow\..\tools;%path%
    17. set PATH=F:\gitP\tensorflow\tensorflow\..\tools\msys64\usr\bin;%path%
    18. bazel --output_user_root F:\bazelTemp build --jobs 8 --config=opt --subcommands //tensorflow/tools/pip_package:build_pip_package 2>&1
    

    Relevant log output

    Extracting Bazel installation...
    Starting local Bazel server and connecting to it...
    INFO: Options provided by the client:
      Inherited 'common' options: --isatty=0 --terminal_columns=80
    INFO: Reading rc options for 'build' from f:\gitp\tensorflow\tensorflow\.bazelrc:
      Inherited 'common' options: --experimental_repo_remote_exec
    INFO: Options provided by the client:
      'build' options: --python_path=C:/Python39/python.exe
    INFO: Reading rc options for 'build' from f:\gitp\tensorflow\tensorflow\.bazelrc:
      'build' options: --define framework_shared_object=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true --experimental_ui_max_stdouterr_bytes=-1 --experimental_cc_shared_library --experimental_link_static_libraries_once=true
    INFO: Reading rc options for 'build' from f:\gitp\tensorflow\tensorflow\.tf_configure.bazelrc:
      'build' options: --action_env PYTHON_BIN_PATH=C:/Python39/python.exe --action_env PYTHON_LIB_PATH=C:/Python39/lib/site-packages --python_path=C:/Python39/python.exe --copt=/d2ReducedOptimizeHugeFunctions --host_copt=/d2ReducedOptimizeHugeFunctions --define=override_eigen_strong_inline=true
    INFO: Reading rc options for 'build' from f:\gitp\tensorflow\tensorflow\.bazelrc:
      'build' options: --deleted_packages=tensorflow/compiler/mlir/tfrt,tensorflow/compiler/mlir/tfrt/benchmarks,tensorflow/compiler/mlir/tfrt/jit/python_binding,tensorflow/compiler/mlir/tfrt/jit/transforms,tensorflow/compiler/mlir/tfrt/python_tests,tensorflow/compiler/mlir/tfrt/tests,tensorflow/compiler/mlir/tfrt/tests/ir,tensorflow/compiler/mlir/tfrt/tests/analysis,tensorflow/compiler/mlir/tfrt/tests/jit,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_tfrt,tensorflow/compiler/mlir/tfrt/tests/lhlo_to_jitrt,tensorflow/compiler/mlir/tfrt/tests/tf_to_corert,tensorflow/compiler/mlir/tfrt/tests/tf_to_tfrt_data,tensorflow/compiler/mlir/tfrt/tests/saved_model,tensorflow/compiler/mlir/tfrt/transforms/lhlo_gpu_to_tfrt_gpu,tensorflow/core/runtime_fallback,tensorflow/core/runtime_fallback/conversion,tensorflow/core/runtime_fallback/kernel,tensorflow/core/runtime_fallback/opdefs,tensorflow/core/runtime_fallback/runtime,tensorflow/core/runtime_fallback/util,tensorflow/core/tfrt/common,tensorflow/core/tfrt/eager,tensorflow/core/tfrt/eager/backends/cpu,tensorflow/core/tfrt/eager/backends/gpu,tensorflow/core/tfrt/eager/core_runtime,tensorflow/core/tfrt/eager/cpp_tests/core_runtime,tensorflow/core/tfrt/gpu,tensorflow/core/tfrt/run_handler_thread_pool,tensorflow/core/tfrt/runtime,tensorflow/core/tfrt/saved_model,tensorflow/core/tfrt/graph_executor,tensorflow/core/tfrt/saved_model/tests,tensorflow/core/tfrt/tpu,tensorflow/core/tfrt/utils
    INFO: Found applicable config definition build:short_logs in file f:\gitp\tensorflow\tensorflow\.bazelrc: --output_filter=DONT_MATCH_ANYTHING
    INFO: Found applicable config definition build:v2 in file f:\gitp\tensorflow\tensorflow\.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
    INFO: Found applicable config definition build:opt in file f:\gitp\tensorflow\tensorflow\.tf_configure.bazelrc: --copt=/arch:AVX --host_copt=/arch:AVX
    INFO: Found applicable config definition build:windows in file f:\gitp\tensorflow\tensorflow\.bazelrc: --copt=/W0 --host_copt=/W0 --copt=/Zc:__cplusplus --host_copt=/Zc:__cplusplus --copt=/D_USE_MATH_DEFINES --host_copt=/D_USE_MATH_DEFINES --features=compiler_param_file --copt=/d2ReducedOptimizeHugeFunctions --host_copt=/d2ReducedOptimizeHugeFunctions --cxxopt=/std:c++17 --host_cxxopt=/std:c++17 --config=monolithic --copt=-DWIN32_LEAN_AND_MEAN --host_copt=-DWIN32_LEAN_AND_MEAN --copt=-DNOGDI --host_copt=-DNOGDI --copt=/experimental:preprocessor --host_copt=/experimental:preprocessor --linkopt=/DEBUG --host_linkopt=/DEBUG --linkopt=/OPT:REF --host_linkopt=/OPT:REF --linkopt=/OPT:ICF --host_linkopt=/OPT:ICF --verbose_failures --features=compiler_param_file --distinct_host_configuration=false
    INFO: Found applicable config definition build:monolithic in file f:\gitp\tensorflow\tensorflow\.bazelrc: --define framework_shared_object=false --experimental_link_static_libraries_once=false
    Loading: 
    Loading: 0 packages loaded
    Loading: 0 packages loaded
    Loading: 0 packages loaded
    Loading: 0 packages loaded
    Loading: 0 packages loaded
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/5c2c5ceec2c0019a4f011538b7fb023f771f1a82.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
    