Dataset Viewer (auto-converted to Parquet)

Columns:
- repo: string (length 30–45)
- file_path: string (length 8–50)
- text: string (length 426–66.5k)
https://github.com/scikit-learn/scikit-learn
README.md
.. -*- mode: rst -*- |Azure| |Codecov| |CircleCI| |Nightly wheels| |Ruff| |PythonVersion| |PyPi| |DOI| |Benchmark| .. |Azure| image:: https://dev.azure.com/scikit-learn/scikit-learn/_apis/build/status/scikit-learn.scikit-learn?branchName=main :target: https://dev.azure.com/scikit-learn/scikit-learn/_build/latest?definitionId=1&branchName=main .. |CircleCI| image:: https://circleci.com/gh/scikit-learn/scikit-learn/tree/main.svg?style=shield :target: https://circleci.com/gh/scikit-learn/scikit-learn .. |Codecov| image:: https://codecov.io/gh/scikit-learn/scikit-learn/branch/main/graph/badge.svg?token=Pk8G9gg3y9 :target: https://codecov.io/gh/scikit-learn/scikit-learn .. |Nightly wheels| image:: https://github.com/scikit-learn/scikit-learn/actions/workflows/wheels.yml/badge.svg?event=schedule :target: https://github.com/scikit-learn/scikit-learn/actions?query=workflow%3A%22Wheel+builder%22+event%3Aschedule .. |Ruff| image:: https://img.shields.io/badge/code%20style-ruff-000000.svg :target: https://github.com/astral-sh/ruff .. |PythonVersion| image:: https://img.shields.io/pypi/pyversions/scikit-learn.svg :target: https://pypi.org/project/scikit-learn/ .. |PyPi| image:: https://img.shields.io/pypi/v/scikit-learn :target: https://pypi.org/project/scikit-learn .. |DOI| image:: https://zenodo.org/badge/21369/scikit-learn/scikit-learn.svg :target: https://zenodo.org/badge/latestdoi/21369/scikit-learn/scikit-learn .. |Benchmark| image:: https://img.shields.io/badge/Benchmarked%20by-asv-blue :target: https://scikit-learn.org/scikit-learn-benchmarks .. |PythonMinVersion| replace:: 3.10 .. |NumPyMinVersion| replace:: 1.22.0 .. |SciPyMinVersion| replace:: 1.8.0 .. |JoblibMinVersion| replace:: 1.2.0 .. |ThreadpoolctlMinVersion| replace:: 3.1.0 .. |MatplotlibMinVersion| replace:: 3.5.0 .. |Scikit-ImageMinVersion| replace:: 0.19.0 .. |PandasMinVersion| replace:: 1.4.0 .. |SeabornMinVersion| replace:: 0.9.0 .. |PytestMinVersion| replace:: 7.1.2 .. |PlotlyMinVersion| replace:: 5.14.0 .. image:: https://raw.githubusercontent.com/scikit-learn/scikit-learn/main/doc/logos/scikit-learn-logo.png :target: https://scikit-learn.org/ **scikit-learn** is a Python module for machine learning built on top of SciPy and is distributed under the 3-Clause BSD license. The project was started in 2007 by David Cournapeau as a Google Summer of Code project, and since then many volunteers have contributed. See the `About us <https://scikit-learn.org/dev/about.html#authors>`__ page for a list of core contributors. It is currently maintained by a team of volunteers. Website: https://scikit-learn.org Installation ------------ Dependencies ~~~~~~~~~~~~ scikit-learn requires: - Python (>= |PythonMinVersion|) - NumPy (>= |NumPyMinVersion|) - SciPy (>= |SciPyMinVersion|) - joblib (>= |JoblibMinVersion|) - threadpoolctl (>= |ThreadpoolctlMinVersion|) ======= Scikit-learn plotting capabilities (i.e., functions start with ``plot_`` and classes end with ``Display``) require Matplotlib (>= |MatplotlibMinVersion|). For running the examples Matplotlib >= |MatplotlibMinVersion| is required. A few examples require scikit-image >= |Scikit-ImageMinVersion|, a few examples require pandas >= |PandasMinVersion|, some examples require seaborn >= |SeabornMinVersion| and plotly >= |PlotlyMinVersion|. 
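A quick, illustrative way to confirm which of these dependencies are present in the current environment (this snippet is not part of the original README and assumes scikit-learn itself is already installed)::

    # Illustrative check of installed dependency versions.
    import sklearn

    sklearn.show_versions()  # prints Python, NumPy, SciPy, joblib, ... versions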
User installation ~~~~~~~~~~~~~~~~~ If you already have a working installation of NumPy and SciPy, the easiest way to install scikit-learn is using ``pip``:: pip install -U scikit-learn or ``conda``:: conda install -c conda-forge scikit-learn The documentation includes more detailed `installation instructions <https://scikit-learn.org/stable/install.html>`_. Changelog --------- See the `changelog <https://scikit-learn.org/dev/whats_new.html>`__ for a history of notable changes to scikit-learn. Development ----------- We welcome new contributors of all experience levels. The scikit-learn community goals are to be helpful, welcoming, and effective. The `Development Guide <https://scikit-learn.org/stable/developers/index.html>`_ has detailed information about contributing code, documentation, tests, and more. We've included some basic information in this README. Important links ~~~~~~~~~~~~~~~ - Official source code repo: https://github.com/scikit-learn/scikit-learn - Download releases: https://pypi.org/project/scikit-learn/ - Issue tracker: https://github.com/scikit-learn/scikit-learn/issues Source code ~~~~~~~~~~~ You can check the latest sources with the command:: git clone https://github.com/scikit-learn/scikit-learn.git Contributing ~~~~~~~~~~~~ To learn more about making a contribution to scikit-learn, please see our `Contributing guide <https://scikit-learn.org/dev/developers/contributing.html>`_. Testing ~~~~~~~ After installation, you can launch the test suite from outside the source directory (you will need to have ``pytest`` >= |PyTestMinVersion| installed):: pytest sklearn See the web page https://scikit-learn.org/dev/developers/contributing.html#testing-and-improving-test-coverage for more information. Random number generation can be controlled during testing by setting the ``SKLEARN_SEED`` environment variable. Submitting a Pull Request ~~~~~~~~~~~~~~~~~~~~~~~~~ Before opening a Pull Request, have a look at the full Contributing page to make sure your code complies with our guidelines: https://scikit-learn.org/stable/developers/index.html Project History --------------- The project was started in 2007 by David Cournapeau as a Google Summer of Code project, and since then many volunteers have contributed. See the `About us <https://scikit-learn.org/dev/about.html#authors>`__ page for a list of core contributors. The project is currently maintained by a team of volunteers. **Note**: `scikit-learn` was previously referred to as `scikits.learn`. 
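As a hedged sketch of the testing workflow above, the same ``pytest sklearn`` run can also be driven from Python with ``SKLEARN_SEED`` fixed for reproducible random number generation (the seed value ``42`` is only an example)::

    # Sketch: roughly equivalent to `SKLEARN_SEED=42 pytest sklearn` in a shell,
    # run from outside the source directory after installation.
    import os
    import pytest

    os.environ["SKLEARN_SEED"] = "42"   # controls RNG seeding during the tests
    raise SystemExit(pytest.main(["sklearn"]))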
Help and Support ---------------- Documentation ~~~~~~~~~~~~~ - HTML documentation (stable release): https://scikit-learn.org - HTML documentation (development version): https://scikit-learn.org/dev/ - FAQ: https://scikit-learn.org/stable/faq.html Communication ~~~~~~~~~~~~~ Main Channels ^^^^^^^^^^^^^ - **Website**: https://scikit-learn.org - **Blog**: https://blog.scikit-learn.org - **Mailing list**: https://mail.python.org/mailman/listinfo/scikit-learn Developer & Support ^^^^^^^^^^^^^^^^^^^^^^ - **GitHub Discussions**: https://github.com/scikit-learn/scikit-learn/discussions - **Stack Overflow**: https://stackoverflow.com/questions/tagged/scikit-learn - **Discord**: https://discord.gg/h9qyrK8Jc8 Social Media Platforms ^^^^^^^^^^^^^^^^^^^^^^ - **LinkedIn**: https://www.linkedin.com/company/scikit-learn - **YouTube**: https://www.youtube.com/channel/UCJosFjYm0ZYVUARxuOZqnnw/playlists - **Facebook**: https://www.facebook.com/scikitlearnofficial/ - **Instagram**: https://www.instagram.com/scikitlearnofficial/ - **TikTok**: https://www.tiktok.com/@scikit.learn - **Bluesky**: https://bsky.app/profile/scikit-learn.org - **Mastodon**: https://mastodon.social/@[email protected] Resources ^^^^^^^^^ - **Calendar**: https://blog.scikit-learn.org/calendar/ - **Logos & Branding**: https://github.com/scikit-learn/scikit-learn/tree/main/doc/logos Citation ~~~~~~~~ If you use scikit-learn in a scientific publication, we would appreciate citations: https://scikit-learn.org/stable/about.html#citing-scikit-learn
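For readers new to the library, a minimal end-to-end sketch of the estimator API described in this README (the dataset and estimator chosen here are arbitrary and not part of the original text)::

    # Illustrative fit/score workflow; any estimator follows the same pattern.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print(clf.score(X_test, y_test))  # mean accuracy on the held-out split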
https://github.com/tensorflow/tensorflow
README.md
<div align="center"> <img src="https://www.tensorflow.org/images/tf_logo_horizontal.png"> </div> [![Python](https://img.shields.io/pypi/pyversions/tensorflow.svg)](https://badge.fury.io/py/tensorflow) [![PyPI](https://badge.fury.io/py/tensorflow.svg)](https://badge.fury.io/py/tensorflow) [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4724125.svg)](https://doi.org/10.5281/zenodo.4724125) [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/1486/badge)](https://bestpractices.coreinfrastructure.org/projects/1486) [![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/tensorflow/tensorflow/badge)](https://securityscorecards.dev/viewer/?uri=github.com/tensorflow/tensorflow) [![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/tensorflow.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:tensorflow) [![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/tensorflow-py.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:tensorflow-py) [![OSSRank](https://shields.io/endpoint?url=https://ossrank.com/shield/44)](https://ossrank.com/p/44) [![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v1.4%20adopted-ff69b4.svg)](CODE_OF_CONDUCT.md) **`Documentation`** | ------------------- | [![Documentation](https://img.shields.io/badge/api-reference-blue.svg)](https://www.tensorflow.org/api_docs/) | [TensorFlow](https://www.tensorflow.org/) is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of [tools](https://www.tensorflow.org/resources/tools), [libraries](https://www.tensorflow.org/resources/libraries-extensions), and [community](https://www.tensorflow.org/community) resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications. TensorFlow was originally developed by researchers and engineers working within the Machine Intelligence team at Google Brain to conduct research in machine learning and neural networks. However, the framework is versatile enough to be used in other areas as well. TensorFlow provides stable [Python](https://www.tensorflow.org/api_docs/python) and [C++](https://www.tensorflow.org/api_docs/cc) APIs, as well as a non-guaranteed backward compatible API for [other languages](https://www.tensorflow.org/api_docs). Keep up-to-date with release announcements and security updates by subscribing to [[email protected]](https://groups.google.com/a/tensorflow.org/forum/#!forum/announce). See all the [mailing lists](https://www.tensorflow.org/community/forums). ## Install See the [TensorFlow install guide](https://www.tensorflow.org/install) for the [pip package](https://www.tensorflow.org/install/pip), to [enable GPU support](https://www.tensorflow.org/install/gpu), use a [Docker container](https://www.tensorflow.org/install/docker), and [build from source](https://www.tensorflow.org/install/source). To install the current release, which includes support for [CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and Windows)*: ``` $ pip install tensorflow ``` Other devices (DirectX and MacOS-metal) are supported using [Device plugins](https://www.tensorflow.org/install/gpu_plugins#available_devices). A smaller CPU-only package is also available: ``` $ pip install tensorflow-cpu ``` To update TensorFlow to the latest version, add `--upgrade` flag to the above commands. 
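A small, illustrative post-install check (not from the original README); the device list printed depends entirely on the local machine:

```python
# Illustrative check that the installation works and whether a GPU is visible.
import tensorflow as tf

print(tf.__version__)                          # installed TensorFlow version
print(tf.config.list_physical_devices("GPU"))  # empty list on CPU-only installs
```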
*Nightly binaries are available for testing using the [tf-nightly](https://pypi.python.org/pypi/tf-nightly) and [tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPI.* #### *Try your first TensorFlow program* ```shell $ python ``` ```python >>> import tensorflow as tf >>> tf.add(1, 2).numpy() 3 >>> hello = tf.constant('Hello, TensorFlow!') >>> hello.numpy() b'Hello, TensorFlow!' ``` For more examples, see the [TensorFlow tutorials](https://www.tensorflow.org/tutorials/). ## Contribution guidelines **If you want to contribute to TensorFlow, be sure to review the [contribution guidelines](CONTRIBUTING.md). This project adheres to TensorFlow's [code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.** **We use [GitHub issues](https://github.com/tensorflow/tensorflow/issues) for tracking requests and bugs, please see [TensorFlow Forum](https://discuss.tensorflow.org/) for general questions and discussion, and please direct specific questions to [Stack Overflow](https://stackoverflow.com/questions/tagged/tensorflow).** The TensorFlow project strives to abide by generally accepted best practices in open-source software development. ## Patching guidelines Follow these steps to patch a specific version of TensorFlow, for example, to apply fixes to bugs or security vulnerabilities: * Clone the TensorFlow repo and switch to the corresponding branch for your desired TensorFlow version, for example, branch `r2.8` for version 2.8. * Apply (that is, cherry-pick) the desired changes and resolve any code conflicts. * Run TensorFlow tests and ensure they pass. * [Build](https://www.tensorflow.org/install/source) the TensorFlow pip package from source. ## Continuous build status You can find more community-supported platforms and configurations in the [TensorFlow SIG Build community builds table](https://github.com/tensorflow/build#community-supported-tensorflow-builds). 
### Official Builds Build Type | Status | Artifacts ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- **Linux CPU** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-cc.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-cc.html) | [PyPI](https://pypi.org/project/tf-nightly/) **Linux GPU** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-gpu-py3.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-gpu-py3.html) | [PyPI](https://pypi.org/project/tf-nightly-gpu/) **Linux XLA** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-xla.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-xla.html) | TBA **macOS** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/macos-py2-cc.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/macos-py2-cc.html) | [PyPI](https://pypi.org/project/tf-nightly/) **Windows CPU** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/windows-cpu.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/windows-cpu.html) | [PyPI](https://pypi.org/project/tf-nightly/) **Windows GPU** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/windows-gpu.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/windows-gpu.html) | [PyPI](https://pypi.org/project/tf-nightly-gpu/) **Android** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/android.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/android.html) | [Download](https://bintray.com/google/tensorflow/tensorflow/_latestVersion) **Raspberry Pi 0 and 1** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/rpi01-py3.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/rpi01-py3.html) | [Py3](https://storage.googleapis.com/tensorflow-nightly/tensorflow-1.10.0-cp34-none-linux_armv6l.whl) **Raspberry Pi 2 and 3** | [![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/rpi23-py3.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/rpi23-py3.html) | [Py3](https://storage.googleapis.com/tensorflow-nightly/tensorflow-1.10.0-cp34-none-linux_armv7l.whl) **Libtensorflow MacOS CPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/macos/latest/macos_cpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/) **Libtensorflow Linux CPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/ubuntu_16/latest/cpu/ubuntu_cpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/) **Libtensorflow Linux GPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/ubuntu_16/latest/gpu/ubuntu_gpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/) **Libtensorflow Windows CPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/windows/latest/cpu/windows_cpu_libtensorflow_binaries.tar.gz) 
[Official GCS](https://storage.googleapis.com/tensorflow/) **Libtensorflow Windows GPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/windows/latest/gpu/windows_gpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/) ## Resources * [TensorFlow.org](https://www.tensorflow.org) * [TensorFlow Tutorials](https://www.tensorflow.org/tutorials/) * [TensorFlow Official Models](https://github.com/tensorflow/models/tree/master/official) * [TensorFlow Examples](https://github.com/tensorflow/examples) * [TensorFlow Codelabs](https://codelabs.developers.google.com/?cat=TensorFlow) * [TensorFlow Blog](https://blog.tensorflow.org) * [Learn ML with TensorFlow](https://www.tensorflow.org/resources/learn-ml) * [TensorFlow Twitter](https://twitter.com/tensorflow) * [TensorFlow YouTube](https://www.youtube.com/channel/UC0rqucBdTuFTjJiefW5t-IQ) * [TensorFlow model optimization roadmap](https://www.tensorflow.org/model_optimization/guide/roadmap) * [TensorFlow White Papers](https://www.tensorflow.org/about/bib) * [TensorBoard Visualization Toolkit](https://github.com/tensorflow/tensorboard) * [TensorFlow Code Search](https://cs.opensource.google/tensorflow/tensorflow) Learn more about the [TensorFlow community](https://www.tensorflow.org/community) and how to [contribute](https://www.tensorflow.org/community/contribute). ## Courses * [Coursera](https://www.coursera.org/search?query=TensorFlow) * [Udacity](https://www.udacity.com/courses/all?search=TensorFlow) * [Edx](https://www.edx.org/search?q=TensorFlow) ## License [Apache License 2.0](LICENSE)
https://github.com/tensorflow/tensorflow
configure.py
# Copyright 2017 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ============================================================================== """configure script to get build parameters from user.""" import argparse import errno import json import os import platform import re import shutil import subprocess import sys _DEFAULT_CUDA_COMPUTE_CAPABILITIES = '3.5,7.0' _SUPPORTED_ANDROID_NDK_VERSIONS = [19, 20, 21, 25] _DEFAULT_PROMPT_ASK_ATTEMPTS = 10 _TF_BAZELRC_FILENAME = '.tf_configure.bazelrc' _TF_WORKSPACE_ROOT = '' _TF_BAZELRC = '' _TF_CURRENT_BAZEL_VERSION = None NCCL_LIB_PATHS = [ 'lib64/', 'lib/powerpc64le-linux-gnu/', 'lib/x86_64-linux-gnu/', '' ] # List of files to configure when building Bazel on Apple platforms. APPLE_BAZEL_FILES = [ 'tensorflow/lite/ios/BUILD', 'tensorflow/lite/objc/BUILD', 'tensorflow/lite/swift/BUILD', 'tensorflow/lite/tools/benchmark/experimental/ios/BUILD' ] # List of files to move when building for iOS. IOS_FILES = [ 'tensorflow/lite/objc/TensorFlowLiteObjC.podspec', 'tensorflow/lite/swift/TensorFlowLiteSwift.podspec', ] class UserInputError(Exception): pass def is_windows(): return platform.system() == 'Windows' def is_linux(): return platform.system() == 'Linux' def is_macos(): return platform.system() == 'Darwin' def is_ppc64le(): return platform.machine() == 'ppc64le' def is_s390x(): return platform.machine() == 's390x' def is_cygwin(): return platform.system().startswith('CYGWIN_NT') def get_input(question): try: try: answer = raw_input(question) except NameError: answer = input(question) # pylint: disable=bad-builtin except EOFError: answer = '' return answer def symlink_force(target, link_name): """Force symlink, equivalent of 'ln -sf'. Args: target: items to link to. link_name: name of the link. 
""" try: os.symlink(target, link_name) except OSError as e: if e.errno == errno.EEXIST: os.remove(link_name) os.symlink(target, link_name) else: raise e def write_to_bazelrc(line): with open(_TF_BAZELRC, 'a') as f: f.write(line + '\n') def write_action_env_to_bazelrc(var_name, var): write_to_bazelrc('build --action_env {}="{}"'.format(var_name, str(var))) def write_repo_env_to_bazelrc(config_name, var_name, var): write_to_bazelrc( 'build:{} --repo_env {}="{}"'.format(config_name, var_name, str(var)) ) def run_shell(cmd, allow_non_zero=False, stderr=None): if stderr is None: stderr = sys.stdout if allow_non_zero: try: output = subprocess.check_output(cmd, stderr=stderr) except subprocess.CalledProcessError as e: output = e.output else: output = subprocess.check_output(cmd, stderr=stderr) return output.decode('UTF-8').strip() def cygpath(path): """Convert path from posix to windows.""" return os.path.abspath(path).replace('\\', '/') def get_python_path(environ_cp, python_bin_path): """Get the python site package paths.""" python_paths = [] if environ_cp.get('PYTHONPATH'): python_paths = environ_cp.get('PYTHONPATH').split(':') try: stderr = open(os.devnull, 'wb') library_paths = run_shell([ python_bin_path, '-c', 'import site; print("\\n".join(site.getsitepackages()))' ], stderr=stderr).split('\n') except subprocess.CalledProcessError: library_paths = [ run_shell([ python_bin_path, '-c', 'import sysconfig;print(sysconfig.get_path("purelib")', ]) ] all_paths = set(python_paths + library_paths) # Sort set so order is deterministic all_paths = sorted(all_paths) paths = [] for path in all_paths: if os.path.isdir(path): paths.append(path) return paths def get_python_major_version(python_bin_path): """Get the python major version.""" return run_shell([python_bin_path, '-c', 'import sys; print(sys.version[0])']) def setup_python(environ_cp): """Setup python related env variables.""" # Get PYTHON_BIN_PATH, default is the current running python. default_python_bin_path = sys.executable ask_python_bin_path = ('Please specify the location of python. [Default is ' '{}]: ').format(default_python_bin_path) while True: python_bin_path = get_from_env_or_user_or_default(environ_cp, 'PYTHON_BIN_PATH', ask_python_bin_path, default_python_bin_path) # Check if the path is valid if os.path.isfile(python_bin_path) and os.access(python_bin_path, os.X_OK): break elif not os.path.exists(python_bin_path): print('Invalid python path: {} cannot be found.'.format(python_bin_path)) else: print('{} is not executable. Is it the python binary?'.format( python_bin_path)) environ_cp['PYTHON_BIN_PATH'] = '' # Convert python path to Windows style before checking lib and version if is_windows() or is_cygwin(): python_bin_path = cygpath(python_bin_path) # Get PYTHON_LIB_PATH python_lib_path = environ_cp.get('PYTHON_LIB_PATH') if not python_lib_path: python_lib_paths = get_python_path(environ_cp, python_bin_path) if environ_cp.get('USE_DEFAULT_PYTHON_LIB_PATH') == '1': python_lib_path = python_lib_paths[0] else: print('Found possible Python library paths:\n %s' % '\n '.join(python_lib_paths)) default_python_lib_path = python_lib_paths[0] python_lib_path = get_input( 'Please input the desired Python library path to use. 
' 'Default is [{}]\n'.format(python_lib_paths[0])) if not python_lib_path: python_lib_path = default_python_lib_path environ_cp['PYTHON_LIB_PATH'] = python_lib_path python_major_version = get_python_major_version(python_bin_path) if python_major_version == '2': write_to_bazelrc('build --host_force_python=PY2') # Convert python path to Windows style before writing into bazel.rc if is_windows() or is_cygwin(): python_lib_path = cygpath(python_lib_path) # Set-up env variables used by python_configure.bzl write_action_env_to_bazelrc('PYTHON_BIN_PATH', python_bin_path) write_action_env_to_bazelrc('PYTHON_LIB_PATH', python_lib_path) write_to_bazelrc('build --python_path=\"{}"'.format(python_bin_path)) environ_cp['PYTHON_BIN_PATH'] = python_bin_path # If chosen python_lib_path is from a path specified in the PYTHONPATH # variable, need to tell bazel to include PYTHONPATH if environ_cp.get('PYTHONPATH'): python_paths = environ_cp.get('PYTHONPATH').split(':') if python_lib_path in python_paths: write_action_env_to_bazelrc('PYTHONPATH', environ_cp.get('PYTHONPATH')) # Write tools/python_bin_path.sh with open( os.path.join(_TF_WORKSPACE_ROOT, 'tools', 'python_bin_path.sh'), 'w') as f: f.write('export PYTHON_BIN_PATH="{}"'.format(python_bin_path)) def reset_tf_configure_bazelrc(): """Reset file that contains customized config settings.""" open(_TF_BAZELRC, 'w').close() def cleanup_makefile(): """Delete any leftover BUILD files from the Makefile build. These files could interfere with Bazel parsing. """ makefile_download_dir = os.path.join(_TF_WORKSPACE_ROOT, 'tensorflow', 'contrib', 'makefile', 'downloads') if os.path.isdir(makefile_download_dir): for root, _, filenames in os.walk(makefile_download_dir): for f in filenames: if f.endswith('BUILD'): os.remove(os.path.join(root, f)) def get_var(environ_cp, var_name, query_item, enabled_by_default, question=None, yes_reply=None, no_reply=None): """Get boolean input from user. If var_name is not set in env, ask user to enable query_item or not. If the response is empty, use the default. Args: environ_cp: copy of the os.environ. var_name: string for name of environment variable, e.g. "TF_NEED_CUDA". query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs". enabled_by_default: boolean for default behavior. question: optional string for how to ask for user input. yes_reply: optional string for reply when feature is enabled. no_reply: optional string for reply when feature is disabled. Returns: boolean value of the variable. Raises: UserInputError: if an environment variable is set, but it cannot be interpreted as a boolean indicator, assume that the user has made a scripting error, and will continue to provide invalid input. Raise the error to avoid infinitely looping. 
""" if not question: question = 'Do you wish to build TensorFlow with {} support?'.format( query_item) if not yes_reply: yes_reply = '{} support will be enabled for TensorFlow.'.format(query_item) if not no_reply: no_reply = 'No {}'.format(yes_reply) yes_reply += '\n' no_reply += '\n' if enabled_by_default: question += ' [Y/n]: ' else: question += ' [y/N]: ' var = environ_cp.get(var_name) if var is not None: var_content = var.strip().lower() true_strings = ('1', 't', 'true', 'y', 'yes') false_strings = ('0', 'f', 'false', 'n', 'no') if var_content in true_strings: var = True elif var_content in false_strings: var = False else: raise UserInputError( 'Environment variable %s must be set as a boolean indicator.\n' 'The following are accepted as TRUE : %s.\n' 'The following are accepted as FALSE: %s.\n' 'Current value is %s.' % (var_name, ', '.join(true_strings), ', '.join(false_strings), var)) while var is None: user_input_origin = get_input(question) user_input = user_input_origin.strip().lower() if user_input == 'y': print(yes_reply) var = True elif user_input == 'n': print(no_reply) var = False elif not user_input: if enabled_by_default: print(yes_reply) var = True else: print(no_reply) var = False else: print('Invalid selection: {}'.format(user_input_origin)) return var def set_action_env_var(environ_cp, var_name, query_item, enabled_by_default, question=None, yes_reply=None, no_reply=None, bazel_config_name=None): """Set boolean action_env variable. Ask user if query_item will be enabled. Default is used if no input is given. Set environment variable and write to .bazelrc. Args: environ_cp: copy of the os.environ. var_name: string for name of environment variable, e.g. "TF_NEED_CUDA". query_item: string for feature related to the variable, e.g. "CUDA for Nvidia GPUs". enabled_by_default: boolean for default behavior. question: optional string for how to ask for user input. yes_reply: optional string for reply when feature is enabled. no_reply: optional string for reply when feature is disabled. bazel_config_name: adding config to .bazelrc instead of action_env. """ var = int( get_var(environ_cp, var_name, query_item, enabled_by_default, question, yes_reply, no_reply)) if not bazel_config_name: write_action_env_to_bazelrc(var_name, var) elif var: write_to_bazelrc('build --config=%s' % bazel_config_name) environ_cp[var_name] = str(var) def convert_version_to_int(version): """Convert a version number to a integer that can be used to compare. Version strings of the form X.YZ and X.Y.Z-xxxxx are supported. The 'xxxxx' part, for instance 'homebrew' on OS/X, is ignored. Args: version: a version to be converted Returns: An integer if converted successfully, otherwise return None. """ version = version.split('-')[0] version_segments = version.split('.') # Treat "0.24" as "0.24.0" if len(version_segments) == 2: version_segments.append('0') for seg in version_segments: if not seg.isdigit(): return None version_str = ''.join(['%03d' % int(seg) for seg in version_segments]) return int(version_str) def retrieve_bazel_version(): """Retrieve installed bazel version (or bazelisk). Returns: The bazel version detected. """ bazel_executable = shutil.which('bazel') if bazel_executable is None: bazel_executable = shutil.which('bazelisk') if bazel_executable is None: print('Cannot find bazel. 
Please install bazel/bazelisk.') sys.exit(1) stderr = open(os.devnull, 'wb') curr_version = run_shell([bazel_executable, '--version'], allow_non_zero=True, stderr=stderr) if curr_version.startswith('bazel '): curr_version = curr_version.split('bazel ')[1] curr_version_int = convert_version_to_int(curr_version) # Check if current bazel version can be detected properly. if not curr_version_int: print('WARNING: current bazel installation is not a release version.') return curr_version print('You have bazel %s installed.' % curr_version) return curr_version def set_cc_opt_flags(environ_cp): """Set up architecture-dependent optimization flags. Also append CC optimization flags to bazel.rc.. Args: environ_cp: copy of the os.environ. """ if is_ppc64le(): # gcc on ppc64le does not support -march, use mcpu instead default_cc_opt_flags = '-mcpu=native' elif is_windows(): default_cc_opt_flags = '/arch:AVX' else: # On all other platforms, no longer use `-march=native` as this can result # in instructions that are too modern being generated. Users that want # maximum performance should compile TF in their environment and can pass # `-march=native` there. # See https://github.com/tensorflow/tensorflow/issues/45744 and duplicates default_cc_opt_flags = '-Wno-sign-compare' question = ('Please specify optimization flags to use during compilation when' ' bazel option "--config=opt" is specified [Default is %s]: ' ) % default_cc_opt_flags cc_opt_flags = get_from_env_or_user_or_default(environ_cp, 'CC_OPT_FLAGS', question, default_cc_opt_flags) for opt in cc_opt_flags.split(): write_to_bazelrc('build:opt --copt=%s' % opt) write_to_bazelrc('build:opt --host_copt=%s' % opt) def set_tf_cuda_clang(environ_cp): """set TF_CUDA_CLANG action_env. Args: environ_cp: copy of the os.environ. """ question = 'Do you want to use clang as CUDA compiler?' yes_reply = 'Clang will be used as CUDA compiler.' no_reply = 'nvcc will be used as CUDA compiler.' set_action_env_var( environ_cp, 'TF_CUDA_CLANG', None, True, question=question, yes_reply=yes_reply, no_reply=no_reply, bazel_config_name='cuda_clang', ) def set_tf_download_clang(environ_cp): """Set TF_DOWNLOAD_CLANG action_env.""" question = 'Do you wish to download a fresh release of clang? (Experimental)' yes_reply = 'Clang will be downloaded and used to compile tensorflow.' no_reply = 'Clang will not be downloaded.' set_action_env_var( environ_cp, 'TF_DOWNLOAD_CLANG', None, False, question=question, yes_reply=yes_reply, no_reply=no_reply, bazel_config_name='download_clang') def get_from_env_or_user_or_default(environ_cp, var_name, ask_for_var, var_default): """Get var_name either from env, or user or default. If var_name has been set as environment variable, use the preset value, else ask for user input. If no input is provided, the default is used. Args: environ_cp: copy of the os.environ. var_name: string for name of environment variable, e.g. "TF_NEED_CUDA". ask_for_var: string for how to ask for user input. var_default: default value string. Returns: string value for var_name """ var = environ_cp.get(var_name) # an intentionally empty value in the # environment is not the same as no value if var is None: var = get_input(ask_for_var) print('\n') if not var: var = var_default return var def prompt_loop_or_load_from_env(environ_cp, var_name, var_default, ask_for_var, check_success, error_msg, suppress_default_error=False, resolve_symlinks=False, n_ask_attempts=_DEFAULT_PROMPT_ASK_ATTEMPTS): """Loop over user prompts for an ENV param until receiving a valid response. 
For the env param var_name, read from the environment or verify user input until receiving valid input. When done, set var_name in the environ_cp to its new value. Args: environ_cp: (Dict) copy of the os.environ. var_name: (String) string for name of environment variable, e.g. "TF_MYVAR". var_default: (String) default value string. ask_for_var: (String) string for how to ask for user input. check_success: (Function) function that takes one argument and returns a boolean. Should return True if the value provided is considered valid. May contain a complex error message if error_msg does not provide enough information. In that case, set suppress_default_error to True. error_msg: (String) String with one and only one '%s'. Formatted with each invalid response upon check_success(input) failure. suppress_default_error: (Bool) Suppress the above error message in favor of one from the check_success function. resolve_symlinks: (Bool) Translate symbolic links into the real filepath. n_ask_attempts: (Integer) Number of times to query for valid input before raising an error and quitting. Returns: [String] The value of var_name after querying for input. Raises: UserInputError: if a query has been attempted n_ask_attempts times without success, assume that the user has made a scripting error, and will continue to provide invalid input. Raise the error to avoid infinitely looping. """ default = environ_cp.get(var_name) or var_default full_query = '%s [Default is %s]: ' % ( ask_for_var, default, ) for _ in range(n_ask_attempts): val = get_from_env_or_user_or_default(environ_cp, var_name, full_query, default) if check_success(val): break if not suppress_default_error: print(error_msg % val) environ_cp[var_name] = '' else: raise UserInputError('Invalid %s setting was provided %d times in a row. ' 'Assuming to be a scripting mistake.' % (var_name, n_ask_attempts)) if resolve_symlinks: val = os.path.realpath(val) environ_cp[var_name] = val return val def set_clang_cuda_compiler_path(environ_cp): """Set CLANG_CUDA_COMPILER_PATH.""" # Upon clang 19 drop the check for 16 default_clang_path = '/usr/lib/llvm-18/bin/clang' if not os.path.exists(default_clang_path): default_clang_path = '/usr/lib/llvm-17/bin/clang' if not os.path.exists(default_clang_path): default_clang_path = '/usr/lib/llvm-16/bin/clang' if not os.path.exists(default_clang_path): default_clang_path = shutil.which('clang') or '' clang_cuda_compiler_path = prompt_loop_or_load_from_env( environ_cp, var_name='CLANG_CUDA_COMPILER_PATH', var_default=default_clang_path, ask_for_var='Please specify clang path that to be used as host compiler.', check_success=os.path.exists, resolve_symlinks=True, error_msg='Invalid clang path. 
%s cannot be found.', ) # Set CLANG_CUDA_COMPILER_PATH environ_cp['CLANG_CUDA_COMPILER_PATH'] = clang_cuda_compiler_path write_action_env_to_bazelrc('CLANG_CUDA_COMPILER_PATH', clang_cuda_compiler_path) return clang_cuda_compiler_path def create_android_ndk_rule(environ_cp): """Set ANDROID_NDK_HOME and write Android NDK WORKSPACE rule.""" if is_windows() or is_cygwin(): default_ndk_path = cygpath('%s/Android/Sdk/ndk-bundle' % environ_cp['APPDATA']) elif is_macos(): default_ndk_path = '%s/library/Android/Sdk/ndk-bundle' % environ_cp['HOME'] else: default_ndk_path = '%s/Android/Sdk/ndk-bundle' % environ_cp['HOME'] def valid_ndk_path(path): return (os.path.exists(path) and os.path.exists(os.path.join(path, 'source.properties'))) android_ndk_home_path = prompt_loop_or_load_from_env( environ_cp, var_name='ANDROID_NDK_HOME', var_default=default_ndk_path, ask_for_var='Please specify the home path of the Android NDK to use.', check_success=valid_ndk_path, error_msg=('The path %s or its child file "source.properties" ' 'does not exist.')) write_action_env_to_bazelrc('ANDROID_NDK_HOME', android_ndk_home_path) write_action_env_to_bazelrc( 'ANDROID_NDK_API_LEVEL', get_ndk_api_level(environ_cp, android_ndk_home_path)) def create_android_sdk_rule(environ_cp): """Set Android variables and write Android SDK WORKSPACE rule.""" if is_windows() or is_cygwin(): default_sdk_path = cygpath('%s/Android/Sdk' % environ_cp['APPDATA']) elif is_macos(): default_sdk_path = '%s/library/Android/Sdk' % environ_cp['HOME'] else: default_sdk_path = '%s/Android/Sdk' % environ_cp['HOME'] def valid_sdk_path(path): return (os.path.exists(path) and os.path.exists(os.path.join(path, 'platforms')) and os.path.exists(os.path.join(path, 'build-tools'))) android_sdk_home_path = prompt_loop_or_load_from_env( environ_cp, var_name='ANDROID_SDK_HOME', var_default=default_sdk_path, ask_for_var='Please specify the home path of the Android SDK to use.', check_success=valid_sdk_path, error_msg=('Either %s does not exist, or it does not contain the ' 'subdirectories "platforms" and "build-tools".')) platforms = os.path.join(android_sdk_home_path, 'platforms') api_levels = sorted(os.listdir(platforms)) api_levels = [x.replace('android-', '') for x in api_levels] def valid_api_level(api_level): return os.path.exists( os.path.join(android_sdk_home_path, 'platforms', 'android-' + api_level)) android_api_level = prompt_loop_or_load_from_env( environ_cp, var_name='ANDROID_API_LEVEL', var_default=api_levels[-1], ask_for_var=('Please specify the Android SDK API level to use. ' '[Available levels: %s]') % api_levels, check_success=valid_api_level, error_msg='Android-%s is not present in the SDK path.') build_tools = os.path.join(android_sdk_home_path, 'build-tools') versions = sorted(os.listdir(build_tools)) def valid_build_tools(version): return os.path.exists( os.path.join(android_sdk_home_path, 'build-tools', version)) android_build_tools_version = prompt_loop_or_load_from_env( environ_cp, var_name='ANDROID_BUILD_TOOLS_VERSION', var_default=versions[-1], ask_for_var=('Please specify an Android build tools version to use. 
' '[Available versions: %s]') % versions, check_success=valid_build_tools, error_msg=('The selected SDK does not have build-tools version %s ' 'available.')) write_action_env_to_bazelrc('ANDROID_BUILD_TOOLS_VERSION', android_build_tools_version) write_action_env_to_bazelrc('ANDROID_SDK_API_LEVEL', android_api_level) write_action_env_to_bazelrc('ANDROID_SDK_HOME', android_sdk_home_path) def get_ndk_api_level(environ_cp, android_ndk_home_path): """Gets the appropriate NDK API level to use for the provided Android NDK path. """ # First check to see if we're using a blessed version of the NDK. properties_path = '%s/source.properties' % android_ndk_home_path if is_windows() or is_cygwin(): properties_path = cygpath(properties_path) with open(properties_path, 'r') as f: filedata = f.read() revision = re.search(r'Pkg.Revision = (\d+)', filedata) if revision: ndk_version = revision.group(1) else: raise Exception('Unable to parse NDK revision.') if int(ndk_version) not in _SUPPORTED_ANDROID_NDK_VERSIONS: print('WARNING: The NDK version in %s is %s, which is not ' 'supported by Bazel (officially supported versions: %s). Please use ' 'another version. Compiling Android targets may result in confusing ' 'errors.\n' % (android_ndk_home_path, ndk_version, _SUPPORTED_ANDROID_NDK_VERSIONS)) write_action_env_to_bazelrc('ANDROID_NDK_VERSION', ndk_version) # Now grab the NDK API level to use. Note that this is different from the # SDK API level, as the NDK API level is effectively the *min* target SDK # version. meta = open(os.path.join(android_ndk_home_path, 'meta/platforms.json')) platforms = json.load(meta) meta.close() aliases = platforms['aliases'] api_levels = sorted(list(set([aliases[i] for i in aliases]))) android_ndk_api_level = prompt_loop_or_load_from_env( environ_cp, var_name='ANDROID_NDK_API_LEVEL', var_default='21', # 21 is required for ARM64 support. ask_for_var=( 'Please specify the (min) Android NDK API level to use. ' '[Available levels: %s]' ) % api_levels, check_success=(lambda *_: True), error_msg='Android-%s is not present in the NDK path.', ) return android_ndk_api_level def set_gcc_host_compiler_path(environ_cp): """Set GCC_HOST_COMPILER_PATH.""" default_gcc_host_compiler_path = shutil.which('gcc') or '' gcc_host_compiler_path = prompt_loop_or_load_from_env( environ_cp, var_name='GCC_HOST_COMPILER_PATH', var_default=default_gcc_host_compiler_path, ask_for_var='Please specify which gcc should be used by nvcc as the host ' 'compiler.', check_success=os.path.exists, resolve_symlinks=True, error_msg='Invalid gcc path. %s cannot be found.', ) write_action_env_to_bazelrc('GCC_HOST_COMPILER_PATH', gcc_host_compiler_path) def choose_compiler(environ_cp): question = 'Do you want to use Clang to build TensorFlow?' yes_reply = 'Clang will be used to compile TensorFlow.' no_reply = 'GCC will be used to compile TensorFlow.' var = int( get_var( environ_cp, 'TF_NEED_CLANG', None, True, question, yes_reply, no_reply ) ) return var def choose_compiler_Win(environ_cp): question = 'Do you want to use Clang to build TensorFlow?' yes_reply = 'Add "--config=win_clang" to compile TensorFlow with CLANG.' no_reply = 'MSVC will be used to compile TensorFlow.' var = int( get_var( environ_cp, 'TF_NEED_CLANG', None, True, question, yes_reply, no_reply ) ) return var def set_clang_compiler_path(environ_cp): """Set CLANG_COMPILER_PATH and environment variables. Loop over user prompts for clang path until receiving a valid response. Default is used if no input is given. 
Set CLANG_COMPILER_PATH and write environment variables CC and BAZEL_COMPILER to .bazelrc. Args: environ_cp: (Dict) copy of the os.environ. Returns: string value for clang_compiler_path. """ # Default path if clang-18 is installed by using apt-get install # remove 16 logic upon release of 19 default_clang_path = '/usr/lib/llvm-18/bin/clang' if not os.path.exists(default_clang_path): default_clang_path = '/usr/lib/llvm-17/bin/clang' if not os.path.exists(default_clang_path): default_clang_path = '/usr/lib/llvm-16/bin/clang' if not os.path.exists(default_clang_path): default_clang_path = shutil.which('clang') or '' clang_compiler_path = prompt_loop_or_load_from_env( environ_cp, var_name='CLANG_COMPILER_PATH', var_default=default_clang_path, ask_for_var='Please specify the path to clang executable.', check_success=os.path.exists, resolve_symlinks=True, error_msg=( 'Invalid clang path. %s cannot be found. Note that TensorFlow now' ' requires clang to compile. You may override this behavior by' ' setting TF_NEED_CLANG=0' ), ) write_action_env_to_bazelrc('CLANG_COMPILER_PATH', clang_compiler_path) write_to_bazelrc('build --repo_env=CC=%s' % clang_compiler_path) write_to_bazelrc('build --repo_env=BAZEL_COMPILER=%s' % clang_compiler_path) return clang_compiler_path def set_clang_compiler_path_win(environ_cp): """Set CLANG_COMPILER_PATH and environment variables. Loop over user prompts for clang path until receiving a valid response. Default is used if no input is given. Set CLANG_COMPILER_PATH and write environment variables CC and BAZEL_COMPILER to .bazelrc. Args: environ_cp: (Dict) copy of the os.environ. Returns: string value for clang_compiler_path. """ # Default path if clang-16 is installed by using apt-get install default_clang_path = 'C:/Program Files/LLVM/bin/clang.exe' if not os.path.exists(default_clang_path): default_clang_path = shutil.which('clang') or '' clang_compiler_path = prompt_loop_or_load_from_env( environ_cp, var_name='CLANG_COMPILER_PATH', var_default=default_clang_path, ask_for_var='Please specify the path to clang executable.', check_success=os.path.exists, resolve_symlinks=True, error_msg=( 'Invalid clang path. %s cannot be found. Note that Clang is now' 'preferred compiler. You may use MSVC by removing --config=win_clang' ), ) write_action_env_to_bazelrc('CLANG_COMPILER_PATH', clang_compiler_path) write_to_bazelrc(f'build --repo_env=CC="{clang_compiler_path}"') write_to_bazelrc(f'build --repo_env=BAZEL_COMPILER="{clang_compiler_path}"') return clang_compiler_path def retrieve_clang_version(clang_executable): """Retrieve installed clang version. Args: clang_executable: (String) path to clang executable Returns: The clang version detected. """ stderr = open(os.devnull, 'wb') curr_version = run_shell([clang_executable, '--version'], allow_non_zero=True, stderr=stderr) curr_version_split = curr_version.lower().split('clang version ') if len(curr_version_split) > 1: curr_version = curr_version_split[1].split()[0].split('git') if len(curr_version) > 1: print('WARNING: current clang installation is not a release version.\n') curr_version = curr_version[0] curr_version_int = convert_version_to_int(curr_version) # Check if current clang version can be detected properly. if not curr_version_int: print('WARNING: current clang installation version unknown.\n') return None print('You have Clang %s installed.\n' % curr_version) return curr_version # Disable clang extension that rejects type definitions within offsetof. # This was added in clang-16 by https://reviews.llvm.org/D133574. 
# Still required for clang-17. # Can be removed once upb is updated, since a type definition is used within # offset of in the current version of ubp. See # https://github.com/protocolbuffers/upb/blob/9effcbcb27f0a665f9f345030188c0b291e32482/upb/upb.c#L183. def disable_clang_offsetof_extension(clang_version): if int(clang_version.split('.')[0]) in (16, 17): write_to_bazelrc('build --copt=-Wno-gnu-offsetof-extensions') def set_hermetic_cuda_version(environ_cp): """Set HERMETIC_CUDA_VERSION.""" ask_cuda_version = ( 'Please specify the hermetic CUDA version you want to use ' 'or leave empty to use the default version. ' ) hermetic_cuda_version = get_from_env_or_user_or_default( environ_cp, 'HERMETIC_CUDA_VERSION', ask_cuda_version, None ) if hermetic_cuda_version: environ_cp['HERMETIC_CUDA_VERSION'] = hermetic_cuda_version write_repo_env_to_bazelrc( 'cuda', 'HERMETIC_CUDA_VERSION', hermetic_cuda_version ) def set_hermetic_cudnn_version(environ_cp): """Set HERMETIC_CUDNN_VERSION.""" ask_cudnn_version = ( 'Please specify the hermetic cuDNN version you want to use ' 'or leave empty to use the default version. ' ) hermetic_cudnn_version = get_from_env_or_user_or_default( environ_cp, 'HERMETIC_CUDNN_VERSION', ask_cudnn_version, None ) if hermetic_cudnn_version: environ_cp['HERMETIC_CUDNN_VERSION'] = hermetic_cudnn_version write_repo_env_to_bazelrc( 'cuda', 'HERMETIC_CUDNN_VERSION', hermetic_cudnn_version ) def set_hermetic_cuda_compute_capabilities(environ_cp): """Set HERMETIC_CUDA_COMPUTE_CAPABILITIES.""" while True: default_cuda_compute_capabilities = _DEFAULT_CUDA_COMPUTE_CAPABILITIES ask_cuda_compute_capabilities = ( 'Please specify a list of comma-separated CUDA compute capabilities ' 'you want to build with.\nYou can find the compute capability of your ' 'device at: https://developer.nvidia.com/cuda-gpus. Each capability ' 'can be specified as "x.y" or "compute_xy" to include both virtual and' ' binary GPU code, or as "sm_xy" to only include the binary ' 'code.\nPlease note that each additional compute capability ' 'significantly increases your build time and binary size, and that ' 'TensorFlow only supports compute capabilities >= 3.5 [Default is: ' '%s]: ' % default_cuda_compute_capabilities) hermetic_cuda_compute_capabilities = get_from_env_or_user_or_default( environ_cp, 'HERMETIC_CUDA_COMPUTE_CAPABILITIES', ask_cuda_compute_capabilities, default_cuda_compute_capabilities, ) # Check whether all capabilities from the input is valid all_valid = True # Remove all whitespace characters before splitting the string # that users may insert by accident, as this will result in error hermetic_cuda_compute_capabilities = ''.join( hermetic_cuda_compute_capabilities.split() ) for compute_capability in hermetic_cuda_compute_capabilities.split(','): m = re.match('[0-9]+.[0-9]+', compute_capability) if not m: # We now support sm_35,sm_50,sm_60,compute_70. sm_compute_match = re.match('(sm|compute)_?([0-9]+[0-9]+)', compute_capability) if not sm_compute_match: print('Invalid compute capability: %s' % compute_capability) all_valid = False else: ver = int(sm_compute_match.group(2)) if ver < 30: print( 'ERROR: TensorFlow only supports small CUDA compute' ' capabilities of sm_30 and higher. Please re-specify the list' ' of compute capabilities excluding version %s.' % ver) all_valid = False if ver < 35: print('WARNING: XLA does not support CUDA compute capabilities ' 'lower than sm_35. 
Disable XLA when running on older GPUs.') else: ver = float(m.group(0)) if ver < 3.0: print('ERROR: TensorFlow only supports CUDA compute capabilities 3.0 ' 'and higher. Please re-specify the list of compute ' 'capabilities excluding version %s.' % ver) all_valid = False if ver < 3.5: print('WARNING: XLA does not support CUDA compute capabilities ' 'lower than 3.5. Disable XLA when running on older GPUs.') if all_valid: break # Reset and Retry environ_cp['HERMETIC_CUDA_COMPUTE_CAPABILITIES'] = '' # Set HERMETIC_CUDA_COMPUTE_CAPABILITIES environ_cp['HERMETIC_CUDA_COMPUTE_CAPABILITIES'] = ( hermetic_cuda_compute_capabilities ) write_repo_env_to_bazelrc( 'cuda', 'HERMETIC_CUDA_COMPUTE_CAPABILITIES', hermetic_cuda_compute_capabilities, ) def set_cuda_local_path(environ_cp, dist_name, env_var): ask_path = ( 'Please specify the local {} path you want to use ' 'or leave empty to use the default version. ' ).format(dist_name) local_path = get_from_env_or_user_or_default( environ_cp, env_var, ask_path, None ) if local_path: environ_cp[env_var] = local_path write_repo_env_to_bazelrc('cuda', env_var, local_path) def set_other_cuda_vars(environ_cp): """Set other CUDA related variables.""" # If CUDA is enabled, always use GPU during build and test. if environ_cp.get('TF_CUDA_CLANG') == '1': write_to_bazelrc('build --config=cuda_clang') else: write_to_bazelrc('build --config=cuda') def system_specific_test_config(environ_cp): """Add default build and test flags required for TF tests to bazelrc.""" write_to_bazelrc('test --test_size_filters=small,medium') # Each instance of --test_tag_filters or --build_tag_filters overrides all # previous instances, so we need to build up a complete list and write a # single list of filters for the .bazelrc file. # Filters to use with both --test_tag_filters and --build_tag_filters test_and_build_filters = ['-benchmark-test', '-no_oss', '-oss_excluded'] # Additional filters for --test_tag_filters beyond those in # test_and_build_filters test_only_filters = ['-oss_serial'] if is_windows(): test_and_build_filters += ['-no_windows', '-windows_excluded'] if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or (environ_cp.get('TF_NEED_ROCM', None) == '1')): test_and_build_filters += ['-no_windows_gpu', '-no_gpu'] else: test_and_build_filters.append('-gpu') elif is_macos(): test_and_build_filters += ['-gpu', '-nomac', '-no_mac', '-mac_excluded'] elif is_linux(): if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or (environ_cp.get('TF_NEED_ROCM', None) == '1')): test_and_build_filters.append('-no_gpu') write_to_bazelrc('test --test_env=LD_LIBRARY_PATH') else: test_and_build_filters.append('-gpu') # Disable tests with "v1only" tag in "v2" Bazel config, but not in "v1" config write_to_bazelrc('test:v1 --test_tag_filters=%s' % ','.join(test_and_build_filters + test_only_filters)) write_to_bazelrc('test:v1 --build_tag_filters=%s' % ','.join(test_and_build_filters)) write_to_bazelrc( 'test:v2 --test_tag_filters=%s' % ','.join(test_and_build_filters + test_only_filters + ['-v1only'])) write_to_bazelrc('test:v2 --build_tag_filters=%s' % ','.join(test_and_build_filters + ['-v1only'])) def set_system_libs_flag(environ_cp): """Set system libs flags.""" syslibs = environ_cp.get('TF_SYSTEM_LIBS', '') if is_s390x() and 'boringssl' not in syslibs: syslibs = 'boringssl' + (', ' + syslibs if syslibs else '') if syslibs: if ',' in syslibs: syslibs = ','.join(sorted(syslibs.split(','))) else: syslibs = ','.join(sorted(syslibs.split())) write_action_env_to_bazelrc('TF_SYSTEM_LIBS', syslibs) for 
varname in ('PREFIX', 'PROTOBUF_INCLUDE_PATH'): if varname in environ_cp: write_to_bazelrc('build --define=%s=%s' % (varname, environ_cp[varname])) def set_windows_build_flags(environ_cp): """Set Windows specific build options.""" # First available in VS 16.4. Speeds up Windows compile times by a lot. See # https://groups.google.com/a/tensorflow.org/d/topic/build/SsW98Eo7l3o/discussion # pylint: disable=line-too-long write_to_bazelrc( 'build --copt=/d2ReducedOptimizeHugeFunctions --host_copt=/d2ReducedOptimizeHugeFunctions' ) if get_var( environ_cp, 'TF_OVERRIDE_EIGEN_STRONG_INLINE', 'Eigen strong inline', True, ('Would you like to override eigen strong inline for some C++ ' 'compilation to reduce the compilation time?'), 'Eigen strong inline overridden.', 'Not overriding eigen strong inline, ' 'some compilations could take more than 20 mins.'): # Due to a known MSVC compiler issue # https://github.com/tensorflow/tensorflow/issues/10521 # Overriding eigen strong inline speeds up the compiling of # conv_grad_ops_3d.cc and conv_ops_3d.cc by 20 minutes, # but this also hurts the performance. Let users decide what they want. write_to_bazelrc('build --define=override_eigen_strong_inline=true') def config_info_line(name, help_text): """Helper function to print formatted help text for Bazel config options.""" print('\t--config=%-12s\t# %s' % (name, help_text)) def configure_ios(environ_cp): """Configures TensorFlow for iOS builds.""" if not is_macos(): return if not get_var(environ_cp, 'TF_CONFIGURE_IOS', 'iOS', False): return for filepath in APPLE_BAZEL_FILES: existing_filepath = os.path.join(_TF_WORKSPACE_ROOT, filepath + '.apple') renamed_filepath = os.path.join(_TF_WORKSPACE_ROOT, filepath) symlink_force(existing_filepath, renamed_filepath) for filepath in IOS_FILES: filename = os.path.basename(filepath) new_filepath = os.path.join(_TF_WORKSPACE_ROOT, filename) symlink_force(filepath, new_filepath) def get_gcc_compiler(environ_cp): gcc_env = environ_cp.get('CXX') or environ_cp.get('CC') or shutil.which('gcc') if gcc_env is not None: gcc_version = run_shell([gcc_env, '--version']).split() if gcc_version[0] in ('gcc', 'g++'): return gcc_env return None def main(): global _TF_WORKSPACE_ROOT global _TF_BAZELRC global _TF_CURRENT_BAZEL_VERSION parser = argparse.ArgumentParser() parser.add_argument( '--workspace', type=str, default=os.path.abspath(os.path.dirname(__file__)), help='The absolute path to your active Bazel workspace.') args = parser.parse_args() _TF_WORKSPACE_ROOT = args.workspace _TF_BAZELRC = os.path.join(_TF_WORKSPACE_ROOT, _TF_BAZELRC_FILENAME) # Make a copy of os.environ to be clear when functions and getting and setting # environment variables. environ_cp = dict(os.environ) try: current_bazel_version = retrieve_bazel_version() except subprocess.CalledProcessError as e: print('Error retrieving bazel version: ', e.output.decode('UTF-8').strip()) raise e _TF_CURRENT_BAZEL_VERSION = convert_version_to_int(current_bazel_version) reset_tf_configure_bazelrc() cleanup_makefile() setup_python(environ_cp) if is_windows(): environ_cp['TF_NEED_OPENCL'] = '0' environ_cp['TF_CUDA_CLANG'] = '0' # TODO(ibiryukov): Investigate using clang as a cpu or cuda compiler on # Windows. 
environ_cp['TF_DOWNLOAD_CLANG'] = '0' environ_cp['TF_NEED_MPI'] = '0' if is_ppc64le(): # Enable MMA Dynamic Dispatch support if 'gcc' and if linker >= 2.35 gcc_env = get_gcc_compiler(environ_cp) if gcc_env is not None: # Use gold linker if 'gcc' and if 'ppc64le' write_to_bazelrc('build --linkopt="-fuse-ld=gold"') # Get the linker version ld_version = run_shell([gcc_env, '-Wl,-version']).split() ld_version_int = 0 for i in range(len(ld_version)): ld_version_int = convert_version_to_int(ld_version[i]) if ld_version_int is not None: break if ld_version_int is None: ld_version_int = 0 # Enable if 'ld' version >= 2.35 if ld_version_int >= 2035000: write_to_bazelrc( 'build --copt="-DEIGEN_ALTIVEC_ENABLE_MMA_DYNAMIC_DISPATCH=1"') with_xla_support = environ_cp.get('TF_ENABLE_XLA', None) if with_xla_support is not None: write_to_bazelrc('build --define=with_xla_support=%s' % ('true' if int(with_xla_support) else 'false')) set_action_env_var( environ_cp, 'TF_NEED_ROCM', 'ROCm', False, bazel_config_name='rocm') if (environ_cp.get('TF_NEED_ROCM') == '1' and 'LD_LIBRARY_PATH' in environ_cp and environ_cp.get('LD_LIBRARY_PATH') != '1'): write_action_env_to_bazelrc('LD_LIBRARY_PATH', environ_cp.get('LD_LIBRARY_PATH')) if (environ_cp.get('TF_NEED_ROCM') == '1' and environ_cp.get('ROCM_PATH')): write_action_env_to_bazelrc('ROCM_PATH', environ_cp.get('ROCM_PATH')) if (environ_cp.get('TF_NEED_ROCM') == '1' and environ_cp.get('HIP_PLATFORM')): write_action_env_to_bazelrc('HIP_PLATFORM', environ_cp.get('HIP_PLATFORM')) if is_windows(): print('\nWARNING: Cannot build with CUDA support on Windows.\n' 'Starting in TF 2.11, CUDA build is not supported for Windows. ' 'For using TensorFlow GPU on Windows, you will need to build/install ' 'TensorFlow in WSL2.\n') environ_cp['TF_NEED_CUDA'] = '0' else: environ_cp['TF_NEED_CUDA'] = str( int(get_var(environ_cp, 'TF_NEED_CUDA', 'CUDA', False))) if environ_cp.get('TF_NEED_CUDA') == '1': set_hermetic_cuda_version(environ_cp) set_hermetic_cudnn_version(environ_cp) set_hermetic_cuda_compute_capabilities(environ_cp) set_cuda_local_path(environ_cp, 'CUDA', 'LOCAL_CUDA_PATH') set_cuda_local_path(environ_cp, 'CUDNN', 'LOCAL_CUDNN_PATH') set_cuda_local_path(environ_cp, 'NCCL', 'LOCAL_NCCL_PATH') if 'LD_LIBRARY_PATH' in environ_cp and environ_cp.get( 'LD_LIBRARY_PATH') != '1': write_action_env_to_bazelrc('LD_LIBRARY_PATH', environ_cp.get('LD_LIBRARY_PATH')) set_tf_cuda_clang(environ_cp) if environ_cp.get('TF_CUDA_CLANG') == '1': # Set up which clang we should use as the cuda / host compiler. clang_cuda_compiler_path = set_clang_cuda_compiler_path(environ_cp) clang_version = retrieve_clang_version(clang_cuda_compiler_path) disable_clang_offsetof_extension(clang_version) else: # Set up which gcc nvcc should use as the host compiler # No need to set this on Windows if not is_windows(): set_gcc_host_compiler_path(environ_cp) set_other_cuda_vars(environ_cp) else: # CUDA not required. Ask whether we should use clang for the CPU build. 
    if is_linux():
      environ_cp['TF_NEED_CLANG'] = str(choose_compiler(environ_cp))
      if environ_cp.get('TF_NEED_CLANG') == '1':
        clang_compiler_path = set_clang_compiler_path(environ_cp)
        clang_version = retrieve_clang_version(clang_compiler_path)
        disable_clang_offsetof_extension(clang_version)
    if is_windows():
      environ_cp['TF_NEED_CLANG'] = str(choose_compiler_Win(environ_cp))
      if environ_cp.get('TF_NEED_CLANG') == '1':
        clang_compiler_path = set_clang_compiler_path_win(environ_cp)
        clang_version = retrieve_clang_version(clang_compiler_path)
        disable_clang_offsetof_extension(clang_version)

  # ROCm / CUDA are mutually exclusive.
  # At most 1 GPU platform can be configured.
  gpu_platform_count = 0
  if environ_cp.get('TF_NEED_ROCM') == '1':
    gpu_platform_count += 1
  if environ_cp.get('TF_NEED_CUDA') == '1':
    gpu_platform_count += 1
  if gpu_platform_count >= 2:
    raise UserInputError('CUDA / ROCm are mutually exclusive. '
                         'At most 1 GPU platform can be configured.')

  set_cc_opt_flags(environ_cp)
  set_system_libs_flag(environ_cp)
  if is_windows():
    set_windows_build_flags(environ_cp)

  if get_var(environ_cp, 'TF_SET_ANDROID_WORKSPACE', 'android workspace', False,
             ('Would you like to interactively configure ./WORKSPACE for '
              'Android builds?'), 'Searching for NDK and SDK installations.',
             'Not configuring the WORKSPACE for Android builds.'):
    create_android_ndk_rule(environ_cp)
    create_android_sdk_rule(environ_cp)

  system_specific_test_config(environ_cp)

  configure_ios(environ_cp)

  print('Preconfigured Bazel build configs. You can use any of the below by '
        'adding "--config=<>" to your build command. See .bazelrc for more '
        'details.')
  config_info_line('mkl', 'Build with MKL support.')
  config_info_line(
      'mkl_aarch64',
      'Build with oneDNN and Compute Library for the Arm Architecture (ACL).')
  config_info_line('monolithic', 'Config for mostly static monolithic build.')
  config_info_line('numa', 'Build with NUMA support.')
  config_info_line(
      'dynamic_kernels',
      '(Experimental) Build kernels into separate shared objects.')
  config_info_line('v1', 'Build with TensorFlow 1 API instead of TF 2 API.')

  print('Preconfigured Bazel build configs to DISABLE default-on features:')
  config_info_line('nogcp', 'Disable GCP support.')
  config_info_line('nonccl', 'Disable NVIDIA NCCL support.')


if __name__ == '__main__':
  main()
https://github.com/pytorch/pytorch
README.md
![PyTorch Logo](https://github.com/pytorch/pytorch/raw/main/docs/source/_static/img/pytorch-logo-dark.png) -------------------------------------------------------------------------------- PyTorch is a Python package that provides two high-level features: - Tensor computation (like NumPy) with strong GPU acceleration - Deep neural networks built on a tape-based autograd system You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed. Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.org](https://hud.pytorch.org/ci/pytorch/pytorch/main). <!-- toc --> - [More About PyTorch](#more-about-pytorch) - [A GPU-Ready Tensor Library](#a-gpu-ready-tensor-library) - [Dynamic Neural Networks: Tape-Based Autograd](#dynamic-neural-networks-tape-based-autograd) - [Python First](#python-first) - [Imperative Experiences](#imperative-experiences) - [Fast and Lean](#fast-and-lean) - [Extensions Without Pain](#extensions-without-pain) - [Installation](#installation) - [Binaries](#binaries) - [NVIDIA Jetson Platforms](#nvidia-jetson-platforms) - [From Source](#from-source) - [Prerequisites](#prerequisites) - [NVIDIA CUDA Support](#nvidia-cuda-support) - [AMD ROCm Support](#amd-rocm-support) - [Intel GPU Support](#intel-gpu-support) - [Get the PyTorch Source](#get-the-pytorch-source) - [Install Dependencies](#install-dependencies) - [Install PyTorch](#install-pytorch) - [Adjust Build Options (Optional)](#adjust-build-options-optional) - [Docker Image](#docker-image) - [Using pre-built images](#using-pre-built-images) - [Building the image yourself](#building-the-image-yourself) - [Building the Documentation](#building-the-documentation) - [Building a PDF](#building-a-pdf) - [Previous Versions](#previous-versions) - [Getting Started](#getting-started) - [Resources](#resources) - [Communication](#communication) - [Releases and Contributing](#releases-and-contributing) - [The Team](#the-team) - [License](#license) <!-- tocstop --> ## More About PyTorch [Learn the basics of PyTorch](https://pytorch.org/tutorials/beginner/basics/intro.html) At a granular level, PyTorch is a library that consists of the following components: | Component | Description | | ---- | --- | | [**torch**](https://pytorch.org/docs/stable/torch.html) | A Tensor library like NumPy, with strong GPU support | | [**torch.autograd**](https://pytorch.org/docs/stable/autograd.html) | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch | | [**torch.jit**](https://pytorch.org/docs/stable/jit.html) | A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code | | [**torch.nn**](https://pytorch.org/docs/stable/nn.html) | A neural networks library deeply integrated with autograd designed for maximum flexibility | | [**torch.multiprocessing**](https://pytorch.org/docs/stable/multiprocessing.html) | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training | | [**torch.utils**](https://pytorch.org/docs/stable/data.html) | DataLoader and other utility functions for convenience | Usually, PyTorch is used either as: - A replacement for NumPy to use the power of GPUs. - A deep learning research platform that provides maximum flexibility and speed. Elaborating Further: ### A GPU-Ready Tensor Library If you use NumPy, then you have used Tensors (a.k.a. ndarray). 
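As a quick, informal illustration (a minimal sketch, not taken from the official docs; it assumes a CUDA-capable GPU may or may not be present and falls back to the CPU), a NumPy array and a torch Tensor interoperate like this:

```python
import numpy as np
import torch

a = np.ones((3, 3))                                      # a plain NumPy ndarray
t = torch.from_numpy(a)                                  # shares memory with `a`, stays on the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"  # use the GPU only if one is present
t = t.to(device)                                         # move (or copy) the tensor to that device
result = (t @ t).cpu().numpy()                           # compute, then come back to NumPy
```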
![Tensor illustration](./docs/source/_static/img/tensor_illustration.png) PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the computation by a huge amount. We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs such as slicing, indexing, mathematical operations, linear algebra, reductions. And they are fast! ### Dynamic Neural Networks: Tape-Based Autograd PyTorch has a unique way of building neural networks: using and replaying a tape recorder. Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world. One has to build a neural network and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch. With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as [torch-autograd](https://github.com/twitter/torch-autograd), [autograd](https://github.com/HIPS/autograd), [Chainer](https://chainer.org), etc. While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date. You get the best of speed and flexibility for your crazy research. ![Dynamic graph](https://github.com/pytorch/pytorch/raw/main/docs/source/_static/img/dynamic_graph.gif) ### Python First PyTorch is not a Python binding into a monolithic C++ framework. It is built to be deeply integrated into Python. You can use it naturally like you would use [NumPy](https://www.numpy.org/) / [SciPy](https://www.scipy.org/) / [scikit-learn](https://scikit-learn.org) etc. You can write your new neural network layers in Python itself, using your favorite libraries and use packages such as [Cython](https://cython.org/) and [Numba](http://numba.pydata.org/). Our goal is to not reinvent the wheel where appropriate. ### Imperative Experiences PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. The stack trace points to exactly where your code was defined. We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines. ### Fast and Lean PyTorch has minimal framework overhead. We integrate acceleration libraries such as [Intel MKL](https://software.intel.com/mkl) and NVIDIA ([cuDNN](https://developer.nvidia.com/cudnn), [NCCL](https://developer.nvidia.com/nccl)) to maximize speed. At the core, its CPU and GPU Tensor and neural network backends are mature and have been tested for years. Hence, PyTorch is quite fast — whether you run small or large neural networks. The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives. We've written custom memory allocators for the GPU to make sure that your deep learning models are maximally memory efficient. This enables you to train bigger deep learning models than before. ### Extensions Without Pain Writing new neural network modules, or interfacing with PyTorch's Tensor API was designed to be straightforward and with minimal abstractions. 
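As a rough sketch of what that looks like (the module name, layer sizes, and dummy data below are illustrative assumptions, not an official recipe):

```python
import torch
from torch import nn


class TinyNet(nn.Module):
    """A small, hypothetical two-layer module; autograd tracks every op in forward()."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.hidden = nn.Linear(in_features, 64)
        self.out = nn.Linear(64, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.out(torch.relu(self.hidden(x)))


net = TinyNet(10, 2)
loss = net(torch.randn(8, 10)).sum()
loss.backward()  # gradients are now in p.grad for every parameter p of the module
```

From here, the module's parameters can be handed straight to an optimizer from `torch.optim`; no registration or wrapper code is involved.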
You can write new neural network layers in Python using the torch API
[or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).

If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and has minimal boilerplate.
No wrapper code needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).


## Installation

### Binaries
Commands to install binaries via Conda or pip wheels are on our website: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)


#### NVIDIA Jetson Platforms

Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) and the L4T container is published [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch).

They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv) and [@ptrblck](https://github.com/ptrblck) are maintaining them.

### From Source

#### Prerequisites
If you are installing from source, you will need:
- Python 3.9 or later
- A compiler that fully supports C++17, such as clang or gcc (on Linux, gcc 9.4.0 or newer is required)
- Visual Studio or Visual Studio Build Tools (Windows only)

\* PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise, Professional, or Community Editions. You can also install the build tools from https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools *do not* come with Visual Studio Code by default.

An example of environment setup is shown below:

* Linux:

```bash
$ source <CONDA_INSTALL_DIR>/bin/activate
$ conda create -y -n <CONDA_NAME>
$ conda activate <CONDA_NAME>
```

* Windows:

```bash
$ source <CONDA_INSTALL_DIR>\Scripts\activate.bat
$ conda create -y -n <CONDA_NAME>
$ conda activate <CONDA_NAME>
$ call "C:\Program Files\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
```

##### NVIDIA CUDA Support
If you want to compile with CUDA support, [select a supported version of CUDA from our support matrix](https://pytorch.org/get-started/locally/), then install the following:
- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
- [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v8.5 or above
- [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA

Note: You can refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/backend/latest/reference/support-matrix.html) for the cuDNN versions compatible with each supported CUDA version, CUDA driver, and NVIDIA hardware.

If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
Other potentially useful environment variables may be found in `setup.py`.

If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).

##### AMD ROCm Support
If you want to compile with ROCm support:
- Install [AMD ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) 4.0 or above.
- ROCm is currently supported only for Linux systems.

By default the build system expects ROCm to be installed in `/opt/rocm`.
If ROCm is installed in a different directory, the `ROCM_PATH` environment variable must be set to the ROCm installation directory.
The build system automatically detects the AMD GPU architecture. Optionally, the AMD GPU architecture can be set explicitly with the `PYTORCH_ROCM_ARCH` environment variable (see the [supported AMD GPU architectures](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html#supported-gpus)).

If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.
Other potentially useful environment variables may be found in `setup.py`.

##### Intel GPU Support
If you want to compile with Intel GPU support:
- Follow the [PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html) instructions.
- Intel GPU is supported for Linux and Windows.

If you want to disable Intel GPU support, export the environment variable `USE_XPU=0`.
Other potentially useful environment variables may be found in `setup.py`.

#### Get the PyTorch Source
```bash
git clone https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```

#### Install Dependencies

**Common**

```bash
conda install cmake ninja
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section above
pip install -r requirements.txt
```

**On Linux**

```bash
pip install mkl-static mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
# magma installation: run with active conda environment. Specify the CUDA version to install
.ci/docker/common/install_magma_conda.sh 12.4

# (optional) If using torch.compile with inductor/triton, install the matching version of triton
# Run from the pytorch directory after cloning
# For Intel GPU support, please explicitly `export USE_XPU=1` before running this command.
make triton
```

**On macOS**

```bash
# Add this package on Intel x86 processor machines only
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv
```

**On Windows**

```bash
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39
```

#### Install PyTorch

**On Linux**

If you're compiling for AMD ROCm, first run this command:
```bash
# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py
```

Install PyTorch
```bash
export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
python setup.py develop
```

**On macOS**

```bash
python3 setup.py develop
```

**On Windows**

If you want to build legacy Python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-on-legacy-code-and-cuda).

**CPU-only builds**

In this mode PyTorch computations will run on your CPU, not your GPU.

```cmd
python setup.py develop
```

Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the build environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instructions [here](https://github.com/pytorch/pytorch/blob/main/docs/source/notes/windows.rst#building-from-source) are an example of setting up both MKL and Intel OpenMP.
Without these configurations for CMake, the Microsoft Visual C OpenMP runtime (vcomp) will be used.

**CUDA based build**

In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.

[NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To add it to an existing CUDA installation, run the CUDA installer again and check the corresponding checkbox.
Make sure that CUDA with Nsight Compute is installed after Visual Studio.

Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If `ninja.exe` is detected in `PATH`, Ninja will be used as the default generator; otherwise, VS 2017 / 2019 will be used.
<br/> If Ninja is selected as the generator, the latest MSVC will be selected as the underlying toolchain.

Additional libraries such as
[Magma](https://developer.nvidia.com/magma), [oneDNN, a.k.a. MKLDNN or DNNL](https://github.com/oneapi-src/oneDNN), and [Sccache](https://github.com/mozilla/sccache) are often needed. Please refer to the [installation-helper](https://github.com/pytorch/pytorch/tree/main/.ci/pytorch/win-test-helpers/installation-helpers) to install them.

You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for other environment variable configurations.


```cmd
cmd

:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: else CMake would throw an error as `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%

:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%

:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe

python setup.py develop
```

**Intel GPU builds**

In this mode PyTorch with Intel GPU support will be built.

Please make sure [the common prerequisites](#prerequisites) as well as [the prerequisites for Intel GPU](#intel-gpu-support) are properly installed and the environment variables are configured prior to starting the build. For build tool support, `Visual Studio 2022` is required.
Then PyTorch can be built with the command: ```cmd :: CMD Commands: :: Set the CMAKE_PREFIX_PATH to help find corresponding packages :: %CONDA_PREFIX% only works after `conda activate custom_env` if defined CMAKE_PREFIX_PATH ( set "CMAKE_PREFIX_PATH=%CONDA_PREFIX%\Library;%CMAKE_PREFIX_PATH%" ) else ( set "CMAKE_PREFIX_PATH=%CONDA_PREFIX%\Library" ) python setup.py develop ``` ##### Adjust Build Options (Optional) You can adjust the configuration of cmake variables optionally (without building first), by doing the following. For example, adjusting the pre-detected directories for CuDNN or BLAS can be done with such a step. On Linux ```bash export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}" python setup.py build --cmake-only ccmake build # or cmake-gui build ``` On macOS ```bash export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}" MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only ccmake build # or cmake-gui build ``` ### Docker Image #### Using pre-built images You can also pull a pre-built docker image from Docker Hub and run with docker v19.03+ ```bash docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest ``` Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g. for multithreaded data loaders) the default shared memory segment size that container runs with is not enough, and you should increase shared memory size either with `--ipc=host` or `--shm-size` command line options to `nvidia-docker run`. #### Building the image yourself **NOTE:** Must be built with a docker version > 18.06 The `Dockerfile` is supplied to build images with CUDA 11.1 support and cuDNN v8. You can pass `PYTHON_VERSION=x.y` make variable to specify which Python version is to be used by Miniconda, or leave it unset to use the default. ```bash make -f docker.Makefile # images are tagged as docker.io/${your_docker_username}/pytorch ``` You can also pass the `CMAKE_VARS="..."` environment variable to specify additional CMake variables to be passed to CMake during the build. See [setup.py](./setup.py) for the list of available variables. ```bash make -f docker.Makefile ``` ### Building the Documentation To build documentation in various formats, you will need [Sphinx](http://www.sphinx-doc.org) and the pytorch_sphinx_theme2. Before you build the documentation locally, ensure `torch` is installed in your environment. For small fixes, you can install the nightly version as described in [Getting Started](https://pytorch.org/get-started/locally/). For more complex fixes, such as adding a new module and docstrings for the new module, you might need to install torch [from source](#from-source). See [Docstring Guidelines](https://github.com/pytorch/pytorch/wiki/Docstring-Guidelines) for docstring conventions. ```bash cd docs/ pip install -r requirements.txt make html make serve ``` Run `make` to get a list of all available output formats. If you get a katex error run `npm install katex`. If it persists, try `npm install -g katex` > [!NOTE] > If you installed `nodejs` with a different package manager (e.g., > `conda`) then `npm` will probably install a version of `katex` that is not > compatible with your version of `nodejs` and doc builds will fail. > A combination of versions that is known to work is `[email protected]` and > `[email protected]`. 
To install the latter with `npm` you can run > ```npm install -g [email protected]``` > [!NOTE] > If you see a numpy incompatibility error, run: > ``` > pip install 'numpy<2' > ``` When you make changes to the dependencies run by CI, edit the `.ci/docker/requirements-docs.txt` file. #### Building a PDF To compile a PDF of all PyTorch documentation, ensure you have `texlive` and LaTeX installed. On macOS, you can install them using: ``` brew install --cask mactex ``` To create the PDF: 1. Run: ``` make latexpdf ``` This will generate the necessary files in the `build/latex` directory. 2. Navigate to this directory and execute: ``` make LATEXOPTS="-interaction=nonstopmode" ``` This will produce a `pytorch.pdf` with the desired content. Run this command one more time so that it generates the correct table of contents and index. > [!NOTE] > To view the Table of Contents, switch to the **Table of Contents** > view in your PDF viewer. ### Previous Versions Installation instructions and binaries for previous PyTorch versions may be found on [our website](https://pytorch.org/get-started/previous-versions). ## Getting Started Three-pointers to get you started: - [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/) - [Examples: easy to understand PyTorch code across all domains](https://github.com/pytorch/examples) - [The API Reference](https://pytorch.org/docs/) - [Glossary](https://github.com/pytorch/pytorch/blob/main/GLOSSARY.md) ## Resources * [PyTorch.org](https://pytorch.org/) * [PyTorch Tutorials](https://pytorch.org/tutorials/) * [PyTorch Examples](https://github.com/pytorch/examples) * [PyTorch Models](https://pytorch.org/hub/) * [Intro to Deep Learning with PyTorch from Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188) * [Intro to Machine Learning with PyTorch from Udacity](https://www.udacity.com/course/intro-to-machine-learning-nanodegree--nd229) * [Deep Neural Networks with PyTorch from Coursera](https://www.coursera.org/learn/deep-neural-networks-with-pytorch) * [PyTorch Twitter](https://twitter.com/PyTorch) * [PyTorch Blog](https://pytorch.org/blog/) * [PyTorch YouTube](https://www.youtube.com/channel/UCWXI5YeOsh03QvJ59PMaXFw) ## Communication * Forums: Discuss implementations, research, etc. https://discuss.pytorch.org * GitHub Issues: Bug reports, feature requests, install issues, RFCs, thoughts, etc. * Slack: The [PyTorch Slack](https://pytorch.slack.com/) hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is [PyTorch Forums](https://discuss.pytorch.org). If you need a slack invite, please fill this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1 * Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign-up here: https://eepurl.com/cbG0rv * Facebook Page: Important announcements about PyTorch. https://www.facebook.com/pytorch * For brand guidelines, please visit our website at [pytorch.org](https://pytorch.org/) ## Releases and Contributing Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues). We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. 
If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of. To learn more about making a contribution to Pytorch, please see our [Contribution page](CONTRIBUTING.md). For more information about PyTorch releases, see [Release page](RELEASE.md). ## The Team PyTorch is a community-driven project with several skillful engineers and researchers contributing to it. PyTorch is currently maintained by [Soumith Chintala](http://soumith.ch), [Gregory Chanan](https://github.com/gchanan), [Dmytro Dzhulgakov](https://github.com/dzhulgakov), [Edward Yang](https://github.com/ezyang), and [Nikita Shulga](https://github.com/malfet) with major contributions coming from hundreds of talented individuals in various forms and means. A non-exhaustive but growing list needs to mention: [Trevor Killeen](https://github.com/killeent), [Sasank Chilamkurthy](https://github.com/chsasank), [Sergey Zagoruyko](https://github.com/szagoruyko), [Adam Lerer](https://github.com/adamlerer), [Francisco Massa](https://github.com/fmassa), [Alykhan Tejani](https://github.com/alykhantejani), [Luca Antiga](https://github.com/lantiga), [Alban Desmaison](https://github.com/albanD), [Andreas Koepf](https://github.com/andreaskoepf), [James Bradbury](https://github.com/jekbradbury), [Zeming Lin](https://github.com/ebetica), [Yuandong Tian](https://github.com/yuandong-tian), [Guillaume Lample](https://github.com/glample), [Marat Dukhan](https://github.com/Maratyszcza), [Natalia Gimelshein](https://github.com/ngimel), [Christian Sarofeen](https://github.com/csarofeen), [Martin Raison](https://github.com/martinraison), [Edward Yang](https://github.com/ezyang), [Zachary Devito](https://github.com/zdevito). Note: This project is unrelated to [hughperkins/pytorch](https://github.com/hughperkins/pytorch) with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch. ## License PyTorch has a BSD-style license, as found in the [LICENSE](LICENSE) file.
https://github.com/pytorch/pytorch
setup.py
# Welcome to the PyTorch setup.py. # Environment variables you are probably interested in: # # DEBUG # build with -O0 and -g (debug symbols) # # REL_WITH_DEB_INFO # build with optimizations and -g (debug symbols) # # USE_CUSTOM_DEBINFO="path/to/file1.cpp;path/to/file2.cpp" # build with debug info only for specified files # # MAX_JOBS # maximum number of compile jobs we should use to compile your code # # USE_CUDA=0 # disables CUDA build # # CFLAGS # flags to apply to both C and C++ files to be compiled (a quirk of setup.py # which we have faithfully adhered to in our build system is that CFLAGS # also applies to C++ files (unless CXXFLAGS is set), in contrast to the # default behavior of autogoo and cmake build systems.) # # A specific flag that can be used is # -DHAS_TORCH_SHOW_DISPATCH_TRACE # build with dispatch trace that can be enabled with # TORCH_SHOW_DISPATCH_TRACE=1 at runtime. # # CC # the C/C++ compiler to use # # Environment variables for feature toggles: # # DEBUG_CUDA=1 # if used in conjunction with DEBUG or REL_WITH_DEB_INFO, will also # build CUDA kernels with -lineinfo --source-in-ptx. Note that # on CUDA 12 this may cause nvcc to OOM, so this is disabled by default. # USE_CUDNN=0 # disables the cuDNN build # # USE_CUSPARSELT=0 # disables the cuSPARSELt build # # USE_CUDSS=0 # disables the cuDSS build # # USE_CUFILE=0 # disables the cuFile build # # USE_FBGEMM=0 # disables the FBGEMM build # # USE_KINETO=0 # disables usage of libkineto library for profiling # # USE_NUMPY=0 # disables the NumPy build # # BUILD_TEST=0 # disables the test build # # USE_MKLDNN=0 # disables use of MKLDNN # # USE_MKLDNN_ACL # enables use of Compute Library backend for MKLDNN on Arm; # USE_MKLDNN must be explicitly enabled. # # MKLDNN_CPU_RUNTIME # MKL-DNN threading mode: TBB or OMP (default) # # USE_STATIC_MKL # Prefer to link with MKL statically - Unix only # USE_ITT=0 # disable use of Intel(R) VTune Profiler's ITT functionality # # USE_NNPACK=0 # disables NNPACK build # # USE_DISTRIBUTED=0 # disables distributed (c10d, gloo, mpi, etc.) build # # USE_TENSORPIPE=0 # disables distributed Tensorpipe backend build # # USE_GLOO=0 # disables distributed gloo backend build # # USE_MPI=0 # disables distributed MPI backend build # # USE_SYSTEM_NCCL=0 # disables use of system-wide nccl (we will use our submoduled # copy in third_party/nccl) # # USE_OPENMP=0 # disables use of OpenMP for parallelization # # USE_FLASH_ATTENTION=0 # disables building flash attention for scaled dot product attention # # USE_MEM_EFF_ATTENTION=0 # disables building memory efficient attention for scaled dot product attention # # BUILD_BINARY # enables the additional binaries/ build # # ATEN_AVX512_256=TRUE # ATen AVX2 kernels can use 32 ymm registers, instead of the default 16. # This option can be used if AVX512 doesn't perform well on a machine. # The FBGEMM library also uses AVX512_256 kernels on Xeon D processors, # but it also has some (optimized) assembly code. # # PYTORCH_BUILD_VERSION # PYTORCH_BUILD_NUMBER # specify the version of PyTorch, rather than the hard-coded version # in this file; used when we're building binaries for distribution # # TORCH_CUDA_ARCH_LIST # specify which CUDA architectures to build for. # ie `TORCH_CUDA_ARCH_LIST="6.0;7.0"` # These are not CUDA versions, instead, they specify what # classes of NVIDIA hardware we should generate PTX for. # # TORCH_XPU_ARCH_LIST # specify which XPU architectures to build for. 
# ie `TORCH_XPU_ARCH_LIST="ats-m150,lnl-m"` # # PYTORCH_ROCM_ARCH # specify which AMD GPU targets to build for. # ie `PYTORCH_ROCM_ARCH="gfx900;gfx906"` # # ONNX_NAMESPACE # specify a namespace for ONNX built here rather than the hard-coded # one in this file; needed to build with other frameworks that share ONNX. # # BLAS # BLAS to be used by Caffe2. Can be MKL, Eigen, ATLAS, FlexiBLAS, or OpenBLAS. If set # then the build will fail if the requested BLAS is not found, otherwise # the BLAS will be chosen based on what is found on your system. # # MKL_THREADING # MKL threading mode: SEQ, TBB or OMP (default) # # USE_ROCM_KERNEL_ASSERT=1 # Enable kernel assert in ROCm platform # # Environment variables we respect (these environment variables are # conventional and are often understood/set by other software.) # # CUDA_HOME (Linux/OS X) # CUDA_PATH (Windows) # specify where CUDA is installed; usually /usr/local/cuda or # /usr/local/cuda-x.y # CUDAHOSTCXX # specify a different compiler than the system one to use as the CUDA # host compiler for nvcc. # # CUDA_NVCC_EXECUTABLE # Specify a NVCC to use. This is used in our CI to point to a cached nvcc # # CUDNN_LIB_DIR # CUDNN_INCLUDE_DIR # CUDNN_LIBRARY # specify where cuDNN is installed # # MIOPEN_LIB_DIR # MIOPEN_INCLUDE_DIR # MIOPEN_LIBRARY # specify where MIOpen is installed # # NCCL_ROOT # NCCL_LIB_DIR # NCCL_INCLUDE_DIR # specify where nccl is installed # # ACL_ROOT_DIR # specify where Compute Library is installed # # LIBRARY_PATH # LD_LIBRARY_PATH # we will search for libraries in these paths # # ATEN_THREADING # ATen parallel backend to use for intra- and inter-op parallelism # possible values: # OMP - use OpenMP for intra-op and native backend for inter-op tasks # NATIVE - use native thread pool for both intra- and inter-op tasks # # USE_SYSTEM_LIBS (work in progress) # Use system-provided libraries to satisfy the build dependencies. # When turned on, the following cmake variables will be toggled as well: # USE_SYSTEM_CPUINFO=ON # USE_SYSTEM_SLEEF=ON # USE_SYSTEM_GLOO=ON # BUILD_CUSTOM_PROTOBUF=OFF # USE_SYSTEM_EIGEN_INSTALL=ON # USE_SYSTEM_FP16=ON # USE_SYSTEM_PTHREADPOOL=ON # USE_SYSTEM_PSIMD=ON # USE_SYSTEM_FXDIV=ON # USE_SYSTEM_BENCHMARK=ON # USE_SYSTEM_ONNX=ON # USE_SYSTEM_XNNPACK=ON # USE_SYSTEM_PYBIND11=ON # USE_SYSTEM_NCCL=ON # USE_SYSTEM_NVTX=ON # # USE_MIMALLOC # Static link mimalloc into C10, and use mimalloc in alloc_cpu & alloc_free. # By default, It is only enabled on Windows. # # USE_PRIORITIZED_TEXT_FOR_LD # Uses prioritized text form cmake/prioritized_text.txt for LD # # BUILD_LIBTORCH_WHL # Builds libtorch.so and its dependencies as a wheel # # BUILD_PYTHON_ONLY # Builds pytorch as a wheel using libtorch.so from a separate wheel import os import sys if sys.platform == "win32" and sys.maxsize.bit_length() == 31: print( "32-bit Windows Python runtime is not supported. Please switch to 64-bit Python." ) sys.exit(-1) import platform BUILD_LIBTORCH_WHL = os.getenv("BUILD_LIBTORCH_WHL", "0") == "1" BUILD_PYTHON_ONLY = os.getenv("BUILD_PYTHON_ONLY", "0") == "1" python_min_version = (3, 9, 0) python_min_version_str = ".".join(map(str, python_min_version)) if sys.version_info < python_min_version: print( f"You are using Python {platform.python_version()}. Python >={python_min_version_str} is required." 
) sys.exit(-1) import filecmp import glob import importlib import importlib.util import json import shutil import subprocess import sysconfig import time from collections import defaultdict import setuptools.command.build_ext import setuptools.command.install import setuptools.command.sdist from setuptools import Extension, find_packages, setup from setuptools.dist import Distribution from tools.build_pytorch_libs import build_pytorch from tools.generate_torch_version import get_torch_version from tools.setup_helpers.cmake import CMake from tools.setup_helpers.env import build_type, IS_DARWIN, IS_LINUX, IS_WINDOWS from tools.setup_helpers.generate_linker_script import gen_linker_script def _get_package_path(package_name): spec = importlib.util.find_spec(package_name) if spec: # The package might be a namespace package, so get_data may fail try: loader = spec.loader if loader is not None: file_path = loader.get_filename() # type: ignore[attr-defined] return os.path.dirname(file_path) except AttributeError: pass return None # set up appropriate env variables if BUILD_LIBTORCH_WHL: # Set up environment variables for ONLY building libtorch.so and not libtorch_python.so # functorch is not supported without python os.environ["BUILD_FUNCTORCH"] = "OFF" if BUILD_PYTHON_ONLY: os.environ["BUILD_LIBTORCHLESS"] = "ON" os.environ["LIBTORCH_LIB_PATH"] = f"{_get_package_path('torch')}/lib" ################################################################################ # Parameters parsed from environment ################################################################################ VERBOSE_SCRIPT = True RUN_BUILD_DEPS = True # see if the user passed a quiet flag to setup.py arguments and respect # that in our parts of the build EMIT_BUILD_WARNING = False RERUN_CMAKE = False CMAKE_ONLY = False filtered_args = [] for i, arg in enumerate(sys.argv): if arg == "--cmake": RERUN_CMAKE = True continue if arg == "--cmake-only": # Stop once cmake terminates. Leave users a chance to adjust build # options. 
CMAKE_ONLY = True continue if arg == "rebuild" or arg == "build": arg = "build" # rebuild is gone, make it build EMIT_BUILD_WARNING = True if arg == "--": filtered_args += sys.argv[i:] break if arg == "-q" or arg == "--quiet": VERBOSE_SCRIPT = False if arg in ["clean", "egg_info", "sdist"]: RUN_BUILD_DEPS = False filtered_args.append(arg) sys.argv = filtered_args if VERBOSE_SCRIPT: def report(*args): print(*args) else: def report(*args): pass # Make distutils respect --quiet too setuptools.distutils.log.warn = report # Constant known variables used throughout this file cwd = os.path.dirname(os.path.abspath(__file__)) lib_path = os.path.join(cwd, "torch", "lib") third_party_path = os.path.join(cwd, "third_party") # CMAKE: full path to python library if IS_WINDOWS: cmake_python_library = "{}/libs/python{}.lib".format( sysconfig.get_config_var("prefix"), sysconfig.get_config_var("VERSION") ) # Fix virtualenv builds if not os.path.exists(cmake_python_library): cmake_python_library = "{}/libs/python{}.lib".format( sys.base_prefix, sysconfig.get_config_var("VERSION") ) else: cmake_python_library = "{}/{}".format( sysconfig.get_config_var("LIBDIR"), sysconfig.get_config_var("INSTSONAME") ) cmake_python_include_dir = sysconfig.get_path("include") ################################################################################ # Version, create_version_file, and package_name ################################################################################ package_name = os.getenv("TORCH_PACKAGE_NAME", "torch") LIBTORCH_PKG_NAME = os.getenv("LIBTORCH_PACKAGE_NAME", "torch_no_python") if BUILD_LIBTORCH_WHL: package_name = LIBTORCH_PKG_NAME package_type = os.getenv("PACKAGE_TYPE", "wheel") version = get_torch_version() report(f"Building wheel {package_name}-{version}") cmake = CMake() def get_submodule_folders(): git_modules_path = os.path.join(cwd, ".gitmodules") default_modules_path = [ os.path.join(third_party_path, name) for name in [ "gloo", "cpuinfo", "onnx", "fbgemm", "cutlass", ] ] if not os.path.exists(git_modules_path): return default_modules_path with open(git_modules_path) as f: return [ os.path.join(cwd, line.split("=", 1)[1].strip()) for line in f if line.strip().startswith("path") ] def check_submodules(): def check_for_files(folder, files): if not any(os.path.exists(os.path.join(folder, f)) for f in files): report("Could not find any of {} in {}".format(", ".join(files), folder)) report("Did you run 'git submodule update --init --recursive'?") sys.exit(1) def not_exists_or_empty(folder): return not os.path.exists(folder) or ( os.path.isdir(folder) and len(os.listdir(folder)) == 0 ) if bool(os.getenv("USE_SYSTEM_LIBS", False)): return folders = get_submodule_folders() # If none of the submodule folders exists, try to initialize them if all(not_exists_or_empty(folder) for folder in folders): try: report(" --- Trying to initialize submodules") start = time.time() subprocess.check_call( ["git", "submodule", "update", "--init", "--recursive"], cwd=cwd ) end = time.time() report(f" --- Submodule initialization took {end - start:.2f} sec") except Exception: report(" --- Submodule initalization failed") report("Please run:\n\tgit submodule update --init --recursive") sys.exit(1) for folder in folders: check_for_files( folder, [ "CMakeLists.txt", "Makefile", "setup.py", "LICENSE", "LICENSE.md", "LICENSE.txt", ], ) check_for_files( os.path.join(third_party_path, "fbgemm", "external", "asmjit"), ["CMakeLists.txt"], ) # Windows has very bad support for symbolic links. 
# Instead of using symlinks, we're going to copy files over def mirror_files_into_torchgen(): # (new_path, orig_path) # Directories are OK and are recursively mirrored. paths = [ ( "torchgen/packaged/ATen/native/native_functions.yaml", "aten/src/ATen/native/native_functions.yaml", ), ("torchgen/packaged/ATen/native/tags.yaml", "aten/src/ATen/native/tags.yaml"), ("torchgen/packaged/ATen/templates", "aten/src/ATen/templates"), ("torchgen/packaged/autograd", "tools/autograd"), ("torchgen/packaged/autograd/templates", "tools/autograd/templates"), ] for new_path, orig_path in paths: # Create the dirs involved in new_path if they don't exist if not os.path.exists(new_path): os.makedirs(os.path.dirname(new_path), exist_ok=True) # Copy the files from the orig location to the new location if os.path.isfile(orig_path): shutil.copyfile(orig_path, new_path) continue if os.path.isdir(orig_path): if os.path.exists(new_path): # copytree fails if the tree exists already, so remove it. shutil.rmtree(new_path) shutil.copytree(orig_path, new_path) continue raise RuntimeError("Check the file paths in `mirror_files_into_torchgen()`") # all the work we need to do _before_ setup runs def build_deps(): report("-- Building version " + version) check_submodules() check_pydep("yaml", "pyyaml") build_python = not BUILD_LIBTORCH_WHL build_pytorch( version=version, cmake_python_library=cmake_python_library, build_python=build_python, rerun_cmake=RERUN_CMAKE, cmake_only=CMAKE_ONLY, cmake=cmake, ) if CMAKE_ONLY: report( 'Finished running cmake. Run "ccmake build" or ' '"cmake-gui build" to adjust build options and ' '"python setup.py install" to build.' ) sys.exit() # Use copies instead of symbolic files. # Windows has very poor support for them. sym_files = [ "tools/shared/_utils_internal.py", "torch/utils/benchmark/utils/valgrind_wrapper/callgrind.h", "torch/utils/benchmark/utils/valgrind_wrapper/valgrind.h", ] orig_files = [ "torch/_utils_internal.py", "third_party/valgrind-headers/callgrind.h", "third_party/valgrind-headers/valgrind.h", ] for sym_file, orig_file in zip(sym_files, orig_files): same = False if os.path.exists(sym_file): if filecmp.cmp(sym_file, orig_file): same = True else: os.remove(sym_file) if not same: shutil.copyfile(orig_file, sym_file) ################################################################################ # Building dependent libraries ################################################################################ missing_pydep = """ Missing build dependency: Unable to `import {importname}`. 
Please install it via `conda install {module}` or `pip install {module}` """.strip() def check_pydep(importname, module): try: importlib.import_module(importname) except ImportError as e: raise RuntimeError( missing_pydep.format(importname=importname, module=module) ) from e class build_ext(setuptools.command.build_ext.build_ext): def _embed_libomp(self): # Copy libiomp5.dylib/libomp.dylib inside the wheel package on MacOS lib_dir = os.path.join(self.build_lib, "torch", "lib") libtorch_cpu_path = os.path.join(lib_dir, "libtorch_cpu.dylib") if not os.path.exists(libtorch_cpu_path): return # Parse libtorch_cpu load commands otool_cmds = ( subprocess.check_output(["otool", "-l", libtorch_cpu_path]) .decode("utf-8") .split("\n") ) rpaths, libs = [], [] for idx, line in enumerate(otool_cmds): if line.strip() == "cmd LC_LOAD_DYLIB": lib_name = otool_cmds[idx + 2].strip() assert lib_name.startswith("name ") libs.append(lib_name.split(" ", 1)[1].rsplit("(", 1)[0][:-1]) if line.strip() == "cmd LC_RPATH": rpath = otool_cmds[idx + 2].strip() assert rpath.startswith("path ") rpaths.append(rpath.split(" ", 1)[1].rsplit("(", 1)[0][:-1]) omplib_path = get_cmake_cache_vars()["OpenMP_libomp_LIBRARY"] omplib_name = get_cmake_cache_vars()["OpenMP_C_LIB_NAMES"] + ".dylib" omplib_rpath_path = os.path.join("@rpath", omplib_name) # This logic is fragile and checks only two cases: # - libtorch_cpu depends on `@rpath/libomp.dylib`e (happens when built inside miniconda environment) # - libtorch_cpu depends on `/abs/path/to/libomp.dylib` (happens when built with libomp from homebrew) if not any(c in libs for c in [omplib_path, omplib_rpath_path]): return # Copy libomp/libiomp5 from rpath locations target_lib = os.path.join(self.build_lib, "torch", "lib", omplib_name) libomp_relocated = False for rpath in rpaths: source_lib = os.path.join(rpath, omplib_name) if not os.path.exists(source_lib): continue self.copy_file(source_lib, target_lib) # Delete old rpath and add @loader_lib to the rpath # This should prevent delocate from attempting to package another instance # of OpenMP library in torch wheel as well as loading two libomp.dylib into # the address space, as libraries are cached by their unresolved names install_name_tool_args = [ "-rpath", rpath, "@loader_path", ] libomp_relocated = True break if not libomp_relocated and os.path.exists(omplib_path): self.copy_file(omplib_path, target_lib) install_name_tool_args = [ "-change", omplib_path, omplib_rpath_path, ] if "@loader_path" not in rpaths: install_name_tool_args += [ "-add_rpath", "@loader_path", ] libomp_relocated = True if libomp_relocated: install_name_tool_args.insert(0, "install_name_tool") install_name_tool_args.append(libtorch_cpu_path) subprocess.check_call(install_name_tool_args) # Copy omp.h from OpenMP_C_FLAGS and copy it into include folder omp_cflags = get_cmake_cache_vars()["OpenMP_C_FLAGS"] if not omp_cflags: return for include_dir in [f[2:] for f in omp_cflags.split(" ") if f.startswith("-I")]: omp_h = os.path.join(include_dir, "omp.h") if not os.path.exists(omp_h): continue target_omp_h = os.path.join(self.build_lib, "torch", "include", "omp.h") self.copy_file(omp_h, target_omp_h) break def run(self): # Report build options. This is run after the build completes so # `CMakeCache.txt` exists and we can get an # accurate report on what is used and what is not. 
cmake_cache_vars = defaultdict(lambda: False, cmake.get_cmake_cache_variables()) if cmake_cache_vars["USE_NUMPY"]: report("-- Building with NumPy bindings") else: report("-- NumPy not found") if cmake_cache_vars["USE_CUDNN"]: report( "-- Detected cuDNN at " + cmake_cache_vars["CUDNN_LIBRARY"] + ", " + cmake_cache_vars["CUDNN_INCLUDE_DIR"] ) else: report("-- Not using cuDNN") if cmake_cache_vars["USE_CUDA"]: report("-- Detected CUDA at " + cmake_cache_vars["CUDA_TOOLKIT_ROOT_DIR"]) else: report("-- Not using CUDA") if cmake_cache_vars["USE_XPU"]: report("-- Detected XPU runtime at " + cmake_cache_vars["SYCL_LIBRARY_DIR"]) else: report("-- Not using XPU") if cmake_cache_vars["USE_MKLDNN"]: report("-- Using MKLDNN") if cmake_cache_vars["USE_MKLDNN_ACL"]: report("-- Using Compute Library for the Arm architecture with MKLDNN") else: report( "-- Not using Compute Library for the Arm architecture with MKLDNN" ) if cmake_cache_vars["USE_MKLDNN_CBLAS"]: report("-- Using CBLAS in MKLDNN") else: report("-- Not using CBLAS in MKLDNN") else: report("-- Not using MKLDNN") if cmake_cache_vars["USE_NCCL"] and cmake_cache_vars["USE_SYSTEM_NCCL"]: report( "-- Using system provided NCCL library at {}, {}".format( cmake_cache_vars["NCCL_LIBRARIES"], cmake_cache_vars["NCCL_INCLUDE_DIRS"], ) ) elif cmake_cache_vars["USE_NCCL"]: report("-- Building NCCL library") else: report("-- Not using NCCL") if cmake_cache_vars["USE_DISTRIBUTED"]: if IS_WINDOWS: report("-- Building without distributed package") else: report("-- Building with distributed package: ") report( " -- USE_TENSORPIPE={}".format(cmake_cache_vars["USE_TENSORPIPE"]) ) report(" -- USE_GLOO={}".format(cmake_cache_vars["USE_GLOO"])) report(" -- USE_MPI={}".format(cmake_cache_vars["USE_OPENMPI"])) else: report("-- Building without distributed package") if cmake_cache_vars["STATIC_DISPATCH_BACKEND"]: report( "-- Using static dispatch with backend {}".format( cmake_cache_vars["STATIC_DISPATCH_BACKEND"] ) ) if cmake_cache_vars["USE_LIGHTWEIGHT_DISPATCH"]: report("-- Using lightweight dispatch") if cmake_cache_vars["BUILD_EXECUTORCH"]: report("-- Building Executorch") if cmake_cache_vars["USE_ITT"]: report("-- Using ITT") else: report("-- Not using ITT") # Do not use clang to compile extensions if `-fstack-clash-protection` is defined # in system CFLAGS c_flags = str(os.getenv("CFLAGS", "")) if ( IS_LINUX and "-fstack-clash-protection" in c_flags and "clang" in os.environ.get("CC", "") ): os.environ["CC"] = str(os.environ["CC"]) # It's an old-style class in Python 2.7... setuptools.command.build_ext.build_ext.run(self) if IS_DARWIN: self._embed_libomp() # Copy the essential export library to compile C++ extensions. if IS_WINDOWS: build_temp = self.build_temp ext_filename = self.get_ext_filename("_C") lib_filename = ".".join(ext_filename.split(".")[:-1]) + ".lib" export_lib = os.path.join( build_temp, "torch", "csrc", lib_filename ).replace("\\", "/") build_lib = self.build_lib target_lib = os.path.join(build_lib, "torch", "lib", "_C.lib").replace( "\\", "/" ) # Create "torch/lib" directory if not exists. # (It is not created yet in "develop" mode.) 
target_dir = os.path.dirname(target_lib) if not os.path.exists(target_dir): os.makedirs(target_dir) self.copy_file(export_lib, target_lib) # In ROCm on Windows case copy rocblas and hipblaslt files into # torch/lib/rocblas/library and torch/lib/hipblaslt/library use_rocm = os.environ.get("USE_ROCM") if use_rocm: rocm_dir_path = os.environ.get("ROCM_DIR") rocm_bin_path = os.path.join(rocm_dir_path, "bin") rocblas_dir = os.path.join(rocm_bin_path, "rocblas") target_rocblas_dir = os.path.join(target_dir, "rocblas") os.makedirs(target_rocblas_dir, exist_ok=True) self.copy_tree(rocblas_dir, target_rocblas_dir) hipblaslt_dir = os.path.join(rocm_bin_path, "hipblaslt") target_hipblaslt_dir = os.path.join(target_dir, "hipblaslt") os.makedirs(target_hipblaslt_dir, exist_ok=True) self.copy_tree(hipblaslt_dir, target_hipblaslt_dir) else: report("The specified environment variable does not exist.") def build_extensions(self): self.create_compile_commands() # Copy functorch extension for i, ext in enumerate(self.extensions): if ext.name != "functorch._C": continue fullname = self.get_ext_fullname(ext.name) filename = self.get_ext_filename(fullname) fileext = os.path.splitext(filename)[1] src = os.path.join(os.path.dirname(filename), "functorch" + fileext) dst = os.path.join(os.path.realpath(self.build_lib), filename) if os.path.exists(src): report(f"Copying {ext.name} from {src} to {dst}") dst_dir = os.path.dirname(dst) if not os.path.exists(dst_dir): os.makedirs(dst_dir) self.copy_file(src, dst) setuptools.command.build_ext.build_ext.build_extensions(self) def get_outputs(self): outputs = setuptools.command.build_ext.build_ext.get_outputs(self) outputs.append(os.path.join(self.build_lib, "caffe2")) report(f"setup.py::get_outputs returning {outputs}") return outputs def create_compile_commands(self): def load(filename): with open(filename) as f: return json.load(f) ninja_files = glob.glob("build/*compile_commands.json") cmake_files = glob.glob("torch/lib/build/*/compile_commands.json") all_commands = [entry for f in ninja_files + cmake_files for entry in load(f)] # cquery does not like c++ compiles that start with gcc. # It forgets to include the c++ header directories. # We can work around this by replacing the gcc calls that python # setup.py generates with g++ calls instead for command in all_commands: if command["command"].startswith("gcc "): command["command"] = "g++ " + command["command"][4:] new_contents = json.dumps(all_commands, indent=2) contents = "" if os.path.exists("compile_commands.json"): with open("compile_commands.json") as f: contents = f.read() if contents != new_contents: with open("compile_commands.json", "w") as f: f.write(new_contents) class concat_license_files: """Merge LICENSE and LICENSES_BUNDLED.txt as a context manager LICENSE is the main PyTorch license, LICENSES_BUNDLED.txt is auto-generated from all the licenses found in ./third_party/. We concatenate them so there is a single license file in the sdist and wheels with all of the necessary licensing info. 
""" def __init__(self, include_files=False): self.f1 = "LICENSE" self.f2 = "third_party/LICENSES_BUNDLED.txt" self.include_files = include_files def __enter__(self): """Concatenate files""" old_path = sys.path sys.path.append(third_party_path) try: from build_bundled import create_bundled finally: sys.path = old_path with open(self.f1) as f1: self.bsd_text = f1.read() with open(self.f1, "a") as f1: f1.write("\n\n") create_bundled( os.path.relpath(third_party_path), f1, include_files=self.include_files ) def __exit__(self, exception_type, exception_value, traceback): """Restore content of f1""" with open(self.f1, "w") as f: f.write(self.bsd_text) try: from wheel.bdist_wheel import bdist_wheel except ImportError: # This is useful when wheel is not installed and bdist_wheel is not # specified on the command line. If it _is_ specified, parsing the command # line will fail before wheel_concatenate is needed wheel_concatenate = None else: # Need to create the proper LICENSE.txt for the wheel class wheel_concatenate(bdist_wheel): """check submodules on sdist to prevent incomplete tarballs""" def run(self): with concat_license_files(include_files=True): super().run() def write_wheelfile(self, *args, **kwargs): super().write_wheelfile(*args, **kwargs) if BUILD_LIBTORCH_WHL: # Remove extraneneous files in the libtorch wheel for root, dirs, files in os.walk(self.bdist_dir): for file in files: if file.endswith((".a", ".so")) and os.path.isfile( os.path.join(self.bdist_dir, file) ): os.remove(os.path.join(root, file)) elif file.endswith(".py"): os.remove(os.path.join(root, file)) # need an __init__.py file otherwise we wouldn't have a package open(os.path.join(self.bdist_dir, "torch", "__init__.py"), "w").close() class install(setuptools.command.install.install): def run(self): super().run() class clean(setuptools.Command): user_options = [] def initialize_options(self): pass def finalize_options(self): pass def run(self): import glob import re with open(".gitignore") as f: ignores = f.read() pat = re.compile(r"^#( BEGIN NOT-CLEAN-FILES )?") for wildcard in filter(None, ignores.split("\n")): match = pat.match(wildcard) if match: if match.group(1): # Marker is found and stop reading .gitignore. break # Ignore lines which begin with '#'. else: # Don't remove absolute paths from the system wildcard = wildcard.lstrip("./") for filename in glob.glob(wildcard): try: os.remove(filename) except OSError: shutil.rmtree(filename, ignore_errors=True) class sdist(setuptools.command.sdist.sdist): def run(self): with concat_license_files(): super().run() def get_cmake_cache_vars(): try: return defaultdict(lambda: False, cmake.get_cmake_cache_variables()) except FileNotFoundError: # CMakeCache.txt does not exist. Probably running "python setup.py clean" over a clean directory. return defaultdict(lambda: False) def configure_extension_build(): r"""Configures extension build options according to system environment and user's choice. Returns: The input to parameters ext_modules, cmdclass, packages, and entry_points as required in setuptools.setup. 
""" cmake_cache_vars = get_cmake_cache_vars() ################################################################################ # Configure compile flags ################################################################################ library_dirs = [] extra_install_requires = [] if IS_WINDOWS: # /NODEFAULTLIB makes sure we only link to DLL runtime # and matches the flags set for protobuf and ONNX extra_link_args = ["/NODEFAULTLIB:LIBCMT.LIB"] # /MD links against DLL runtime # and matches the flags set for protobuf and ONNX # /EHsc is about standard C++ exception handling extra_compile_args = ["/MD", "/FS", "/EHsc"] else: extra_link_args = [] extra_compile_args = [ "-Wall", "-Wextra", "-Wno-strict-overflow", "-Wno-unused-parameter", "-Wno-missing-field-initializers", "-Wno-unknown-pragmas", # Python 2.6 requires -fno-strict-aliasing, see # http://legacy.python.org/dev/peps/pep-3123/ # We also depend on it in our code (even Python 3). "-fno-strict-aliasing", ] library_dirs.append(lib_path) main_compile_args = [] main_libraries = ["torch_python"] main_link_args = [] main_sources = ["torch/csrc/stub.c"] if BUILD_LIBTORCH_WHL: main_libraries = ["torch"] main_sources = [] if build_type.is_debug(): if IS_WINDOWS: extra_compile_args.append("/Z7") extra_link_args.append("/DEBUG:FULL") else: extra_compile_args += ["-O0", "-g"] extra_link_args += ["-O0", "-g"] if build_type.is_rel_with_deb_info(): if IS_WINDOWS: extra_compile_args.append("/Z7") extra_link_args.append("/DEBUG:FULL") else: extra_compile_args += ["-g"] extra_link_args += ["-g"] # pypi cuda package that requires installation of cuda runtime, cudnn and cublas # should be included in all wheels uploaded to pypi pytorch_extra_install_requirements = os.getenv( "PYTORCH_EXTRA_INSTALL_REQUIREMENTS", "" ) if pytorch_extra_install_requirements: report( f"pytorch_extra_install_requirements: {pytorch_extra_install_requirements}" ) extra_install_requires += pytorch_extra_install_requirements.split("|") # Cross-compile for M1 if IS_DARWIN: macos_target_arch = os.getenv("CMAKE_OSX_ARCHITECTURES", "") if macos_target_arch in ["arm64", "x86_64"]: macos_sysroot_path = os.getenv("CMAKE_OSX_SYSROOT") if macos_sysroot_path is None: macos_sysroot_path = ( subprocess.check_output( ["xcrun", "--show-sdk-path", "--sdk", "macosx"] ) .decode("utf-8") .strip() ) extra_compile_args += [ "-arch", macos_target_arch, "-isysroot", macos_sysroot_path, ] extra_link_args += ["-arch", macos_target_arch] def make_relative_rpath_args(path): if IS_DARWIN: return ["-Wl,-rpath,@loader_path/" + path] elif IS_WINDOWS: return [] else: return ["-Wl,-rpath,$ORIGIN/" + path] ################################################################################ # Declare extensions and package ################################################################################ extensions = [] excludes = ["tools", "tools.*", "caffe2", "caffe2.*"] if not cmake_cache_vars["BUILD_FUNCTORCH"]: excludes.extend(["functorch", "functorch.*"]) packages = find_packages(exclude=excludes) C = Extension( "torch._C", libraries=main_libraries, sources=main_sources, language="c", extra_compile_args=main_compile_args + extra_compile_args, include_dirs=[], library_dirs=library_dirs, extra_link_args=extra_link_args + main_link_args + make_relative_rpath_args("lib"), ) extensions.append(C) # These extensions are built by cmake and copied manually in build_extensions() # inside the build_ext implementation if cmake_cache_vars["BUILD_FUNCTORCH"]: extensions.append( Extension(name="functorch._C", sources=[]), ) 
cmdclass = { "bdist_wheel": wheel_concatenate, "build_ext": build_ext, "clean": clean, "install": install, "sdist": sdist, } entry_points = { "console_scripts": [ "torchrun = torch.distributed.run:main", ], "torchrun.logs_specs": [ "default = torch.distributed.elastic.multiprocessing:DefaultLogsSpecs", ], } if cmake_cache_vars["USE_DISTRIBUTED"]: # Only enable fr_trace command if distributed is enabled entry_points["console_scripts"].append( "torchfrtrace = tools.flight_recorder.fr_trace:main", ) return extensions, cmdclass, packages, entry_points, extra_install_requires # post run, warnings, printed at the end to make them more visible build_update_message = """ It is no longer necessary to use the 'build' or 'rebuild' targets To install: $ python setup.py install To develop locally: $ python setup.py develop To force cmake to re-generate native build files (off by default): $ python setup.py develop --cmake """ def print_box(msg): lines = msg.split("\n") size = max(len(l) + 1 for l in lines) print("-" * (size + 2)) for l in lines: print("|{}{}|".format(l, " " * (size - len(l)))) print("-" * (size + 2)) def main(): if BUILD_LIBTORCH_WHL and BUILD_PYTHON_ONLY: raise RuntimeError( "Conflict: 'BUILD_LIBTORCH_WHL' and 'BUILD_PYTHON_ONLY' can't both be 1. Set one to 0 and rerun." ) install_requires = [ "filelock", "typing-extensions>=4.10.0", 'setuptools ; python_version >= "3.12"', "sympy>=1.13.3", "networkx", "jinja2", "fsspec", ] if BUILD_PYTHON_ONLY: install_requires.append(f"{LIBTORCH_PKG_NAME}=={get_torch_version()}") use_prioritized_text = str(os.getenv("USE_PRIORITIZED_TEXT_FOR_LD", "")) if ( use_prioritized_text == "" and platform.system() == "Linux" and platform.processor() == "aarch64" ): print_box( """ WARNING: we strongly recommend enabling linker script optimization for ARM + CUDA. To do so please export USE_PRIORITIZED_TEXT_FOR_LD=1 """ ) if use_prioritized_text == "1" or use_prioritized_text == "True": gen_linker_script( filein="cmake/prioritized_text.txt", fout="cmake/linker_script.ld" ) linker_script_path = os.path.abspath("cmake/linker_script.ld") os.environ["LDFLAGS"] = os.getenv("LDFLAGS", "") + f" -T{linker_script_path}" os.environ["CFLAGS"] = ( os.getenv("CFLAGS", "") + " -ffunction-sections -fdata-sections" ) os.environ["CXXFLAGS"] = ( os.getenv("CXXFLAGS", "") + " -ffunction-sections -fdata-sections" ) # Parse the command line and check the arguments before we proceed with # building deps and setup. We need to set values so `--help` works. 
dist = Distribution() dist.script_name = os.path.basename(sys.argv[0]) dist.script_args = sys.argv[1:] try: dist.parse_command_line() except setuptools.distutils.errors.DistutilsArgError as e: print(e) sys.exit(1) mirror_files_into_torchgen() if RUN_BUILD_DEPS: build_deps() ( extensions, cmdclass, packages, entry_points, extra_install_requires, ) = configure_extension_build() install_requires += extra_install_requires extras_require = { "optree": ["optree>=0.13.0"], "opt-einsum": ["opt-einsum>=3.3"], "pyyaml": ["pyyaml"], } # Read in README.md for our long_description with open(os.path.join(cwd, "README.md"), encoding="utf-8") as f: long_description = f.read() version_range_max = max(sys.version_info[1], 13) + 1 torch_package_data = [ "py.typed", "bin/*", "test/*", "*.pyi", "**/*.pyi", "lib/*.pdb", "lib/**/*.pdb", "lib/*shm*", "lib/torch_shm_manager", "lib/*.h", "lib/**/*.h", "include/*.h", "include/**/*.h", "include/*.hpp", "include/**/*.hpp", "include/*.cuh", "include/**/*.cuh", "_inductor/codegen/*.h", "_inductor/codegen/aoti_runtime/*.cpp", "_inductor/script.ld", "_export/serde/*.yaml", "_export/serde/*.thrift", "share/cmake/ATen/*.cmake", "share/cmake/Caffe2/*.cmake", "share/cmake/Caffe2/public/*.cmake", "share/cmake/Caffe2/Modules_CUDA_fix/*.cmake", "share/cmake/Caffe2/Modules_CUDA_fix/upstream/*.cmake", "share/cmake/Caffe2/Modules_CUDA_fix/upstream/FindCUDA/*.cmake", "share/cmake/Gloo/*.cmake", "share/cmake/Tensorpipe/*.cmake", "share/cmake/Torch/*.cmake", "utils/benchmark/utils/*.cpp", "utils/benchmark/utils/valgrind_wrapper/*.cpp", "utils/benchmark/utils/valgrind_wrapper/*.h", "utils/model_dump/skeleton.html", "utils/model_dump/code.js", "utils/model_dump/*.mjs", ] if not BUILD_LIBTORCH_WHL: torch_package_data.extend( [ "lib/libtorch_python.so", "lib/libtorch_python.dylib", "lib/libtorch_python.dll", ] ) if not BUILD_PYTHON_ONLY: torch_package_data.extend( [ "lib/*.so*", "lib/*.dylib*", "lib/*.dll", "lib/*.lib", ] ) aotriton_image_path = os.path.join(lib_path, "aotriton.images") aks2_files = [] for root, dirs, files in os.walk(aotriton_image_path): subpath = os.path.relpath(root, start=aotriton_image_path) for fn in files: aks2_files.append(os.path.join("lib/aotriton.images", subpath, fn)) torch_package_data += aks2_files if get_cmake_cache_vars()["USE_TENSORPIPE"]: torch_package_data.extend( [ "include/tensorpipe/*.h", "include/tensorpipe/**/*.h", ] ) if get_cmake_cache_vars()["USE_KINETO"]: torch_package_data.extend( [ "include/kineto/*.h", "include/kineto/**/*.h", ] ) torchgen_package_data = [ "packaged/*", "packaged/**/*", ] package_data = { "torch": torch_package_data, } if not BUILD_LIBTORCH_WHL: package_data["torchgen"] = torchgen_package_data else: # no extensions in BUILD_LIBTORCH_WHL mode extensions = [] setup( name=package_name, version=version, description=( "Tensors and Dynamic neural networks in Python with strong GPU acceleration" ), long_description=long_description, long_description_content_type="text/markdown", ext_modules=extensions, cmdclass=cmdclass, packages=packages, entry_points=entry_points, install_requires=install_requires, extras_require=extras_require, package_data=package_data, # TODO fix later Manifest.IN file was previously ignored include_package_data=False, # defaults to True with pyproject.toml file url="https://pytorch.org/", download_url="https://github.com/pytorch/pytorch/tags", author="PyTorch Team", author_email="[email protected]", python_requires=f">={python_min_version_str}", # PyPI package information. 
classifiers=[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Intended Audience :: Education", "Intended Audience :: Science/Research", "License :: OSI Approved :: BSD License", "Topic :: Scientific/Engineering", "Topic :: Scientific/Engineering :: Mathematics", "Topic :: Scientific/Engineering :: Artificial Intelligence", "Topic :: Software Development", "Topic :: Software Development :: Libraries", "Topic :: Software Development :: Libraries :: Python Modules", "Programming Language :: C++", "Programming Language :: Python :: 3", ] + [ f"Programming Language :: Python :: 3.{i}" for i in range(python_min_version[1], version_range_max) ], license="BSD-3-Clause", keywords="pytorch, machine learning", ) if EMIT_BUILD_WARNING: print_box(build_update_message) if __name__ == "__main__": main()
https://github.com/huggingface/transformers
README.md
<!--- Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <p align="center"> <picture> <source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg"> <source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg"> <img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;"> </picture> <br/> <br/> </p> <p align="center"> <a href="https://huggingface.com/models"><img alt="Checkpoints on Hub" src="https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen"></a> <a href="https://circleci.com/gh/huggingface/transformers"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"></a> <a href="https://github.com/huggingface/transformers/blob/main/LICENSE"><img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"></a> <a href="https://huggingface.co/docs/transformers/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"></a> <a href="https://github.com/huggingface/transformers/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"></a> <a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a> <a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a> </p> <h4 align="center"> <p> <b>English</b> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_zh-hans.md">简体中文</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_zh-hant.md">繁體中文</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ko.md">한국어</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_es.md">Español</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ja.md">日本語</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_hd.md">हिन्दी</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ru.md">Русский</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_pt-br.md">Рortuguês</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_te.md">తెలుగు</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_fr.md">Français</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_de.md">Deutsch</a> | <a 
href="https://github.com/huggingface/transformers/blob/main/i18n/README_vi.md">Tiếng Việt</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ar.md">العربية</a> | <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ur.md">اردو</a> | </p> </h4> <h3 align="center"> <p>State-of-the-art pretrained models for inference and training</p> </h3> <h3 align="center"> <a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a> </h3> Transformers is a library of pretrained text, computer vision, audio, video, and multimodal models for inference and training. Use Transformers to fine-tune models on your data, build inference applications, and for generative AI use cases across multiple modalities. There are over 500K+ Transformers [model checkpoints](https://huggingface.co/models?library=transformers&sort=trending) on the [Hugging Face Hub](https://huggingface.com/models) you can use. Explore the [Hub](https://huggingface.com/) today to find a model and use Transformers to help you get started right away. ## Installation Transformers works with Python 3.9+ [PyTorch](https://pytorch.org/get-started/locally/) 2.1+, [TensorFlow](https://www.tensorflow.org/install/pip) 2.6+, and [Flax](https://flax.readthedocs.io/en/latest/) 0.4.1+. Create and activate a virtual environment with [venv](https://docs.python.org/3/library/venv.html) or [uv](https://docs.astral.sh/uv/), a fast Rust-based Python package and project manager. ```py # venv python -m venv .my-env source .my-env/bin/activate # uv uv venv .my-env source .my-env/bin/activate ``` Install Transformers in your virtual environment. ```py # pip pip install "transformers[torch]" # uv uv pip install "transformers[torch]" ``` Install Transformers from source if you want the latest changes in the library or are interested in contributing. However, the *latest* version may not be stable. Feel free to open an [issue](https://github.com/huggingface/transformers/issues) if you encounter an error. ```shell git clone https://github.com/huggingface/transformers.git cd transformers # pip pip install .[torch] # uv uv pip install .[torch] ``` ## Quickstart Get started with Transformers right away with the [Pipeline](https://huggingface.co/docs/transformers/pipeline_tutorial) API. The `Pipeline` is a high-level inference class that supports text, audio, vision, and multimodal tasks. It handles preprocessing the input and returns the appropriate output. Instantiate a pipeline and specify model to use for text generation. The model is downloaded and cached so you can easily reuse it again. Finally, pass some text to prompt the model. ```py from transformers import pipeline pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B") pipeline("the secret to baking a really good cake is ") [{'generated_text': 'the secret to baking a really good cake is 1) to use the right ingredients and 2) to follow the recipe exactly. the recipe for the cake is as follows: 1 cup of sugar, 1 cup of flour, 1 cup of milk, 1 cup of butter, 1 cup of eggs, 1 cup of chocolate chips. if you want to make 2 cakes, how much sugar do you need? To make 2 cakes, you will need 2 cups of sugar.'}] ``` To chat with a model, the usage pattern is the same. The only difference is you need to construct a chat history (the input to `Pipeline`) between you and the system. > [!TIP] > You can also chat with a model directly from the command line. 
> ```shell > transformers chat Qwen/Qwen2.5-0.5B-Instruct > ``` ```py import torch from transformers import pipeline chat = [ {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."}, {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"} ] pipeline = pipeline(task="text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16, device_map="auto") response = pipeline(chat, max_new_tokens=512) print(response[0]["generated_text"][-1]["content"]) ``` Expand the examples below to see how `Pipeline` works for different modalities and tasks. <details> <summary>Automatic speech recognition</summary> ```py from transformers import pipeline pipeline = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3") pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} ``` </details> <details> <summary>Image classification</summary> <h3 align="center"> <a><img src="https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"></a> </h3> ```py from transformers import pipeline pipeline = pipeline(task="image-classification", model="facebook/dinov2-small-imagenet1k-1-layer") pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png") [{'label': 'macaw', 'score': 0.997848391532898}, {'label': 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita', 'score': 0.0016551691805943847}, {'label': 'lorikeet', 'score': 0.00018523589824326336}, {'label': 'African grey, African gray, Psittacus erithacus', 'score': 7.85409429227002e-05}, {'label': 'quail', 'score': 5.502637941390276e-05}] ``` </details> <details> <summary>Visual question answering</summary> <h3 align="center"> <a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg"></a> </h3> ```py from transformers import pipeline pipeline = pipeline(task="visual-question-answering", model="Salesforce/blip-vqa-base") pipeline( image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg", question="What is in the image?", ) [{'answer': 'statue of liberty'}] ``` </details> ## Why should I use Transformers? 1. Easy-to-use state-of-the-art models: - High performance on natural language understanding & generation, computer vision, audio, video, and multimodal tasks. - Low barrier to entry for researchers, engineers, and developers. - Few user-facing abstractions with just three classes to learn. - A unified API for using all our pretrained models. 1. Lower compute costs, smaller carbon footprint: - Share trained models instead of training from scratch. - Reduce compute time and production costs. - Dozens of model architectures with 1M+ pretrained checkpoints across all modalities. 1. Choose the right framework for every part of a models lifetime: - Train state-of-the-art models in 3 lines of code. - Move a single model between PyTorch/JAX/TF2.0 frameworks at will. - Pick the right framework for training, evaluation, and production. 1. Easily customize a model or an example to your needs: - We provide examples for each architecture to reproduce the results published by its original authors. - Model internals are exposed as consistently as possible. - Model files can be used independently of the library for quick experiments. 
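The unified API mentioned above also applies below the `Pipeline` level. As a minimal sketch (assuming the same `Qwen/Qwen2.5-1.5B` checkpoint from the Quickstart; this snippet is an illustration, not an excerpt from the official docs), the `AutoTokenizer` and `AutoModelForCausalLM` classes load the identical checkpoint directly:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights from the Hub (cached after the first call).
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")

# Tokenize a prompt, generate a continuation, and decode it back to text.
inputs = tokenizer("the secret to baking a really good cake is ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```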
<a target="_blank" href="https://huggingface.co/enterprise"> <img alt="Hugging Face Enterprise Hub" src="https://github.com/user-attachments/assets/247fb16d-d251-4583-96c4-d3d76dda4925"> </a><br> ## Why shouldn't I use Transformers? - This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files. - The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like [Accelerate](https://huggingface.co/docs/accelerate). - The [example scripts]((https://github.com/huggingface/transformers/tree/main/examples)) are only *examples*. They may not necessarily work out-of-the-box on your specific use case and you'll need to adapt the code for it to work. ## 100 projects using Transformers Transformers is more than a toolkit to use pretrained models, it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects. In order to celebrate Transformers 100,000 stars, we wanted to put the spotlight on the community with the [awesome-transformers](./awesome-transformers.md) page which lists 100 incredible projects built with Transformers. If you own or use a project that you believe should be part of the list, please open a PR to add it! ## Example models You can test most of our models directly on their [Hub model pages](https://huggingface.co/models). Expand each modality below to see a few example models for various use cases. <details> <summary>Audio</summary> - Audio classification with [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo) - Automatic speech recognition with [Moonshine](https://huggingface.co/UsefulSensors/moonshine) - Keyword spotting with [Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks) - Speech to speech generation with [Moshi](https://huggingface.co/kyutai/moshiko-pytorch-bf16) - Text to audio with [MusicGen](https://huggingface.co/facebook/musicgen-large) - Text to speech with [Bark](https://huggingface.co/suno/bark) </details> <details> <summary>Computer vision</summary> - Automatic mask generation with [SAM](https://huggingface.co/facebook/sam-vit-base) - Depth estimation with [DepthPro](https://huggingface.co/apple/DepthPro-hf) - Image classification with [DINO v2](https://huggingface.co/facebook/dinov2-base) - Keypoint detection with [SuperGlue](https://huggingface.co/magic-leap-community/superglue_outdoor) - Keypoint matching with [SuperGlue](https://huggingface.co/magic-leap-community/superglue) - Object detection with [RT-DETRv2](https://huggingface.co/PekingU/rtdetr_v2_r50vd) - Pose Estimation with [VitPose](https://huggingface.co/usyd-community/vitpose-base-simple) - Universal segmentation with [OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_swin_large) - Video classification with [VideoMAE](https://huggingface.co/MCG-NJU/videomae-large) </details> <details> <summary>Multimodal</summary> - Audio or text to text with [Qwen2-Audio](https://huggingface.co/Qwen/Qwen2-Audio-7B) - Document question answering with [LayoutLMv3](https://huggingface.co/microsoft/layoutlmv3-base) - Image or text to text with [Qwen-VL](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) - Image captioning 
[BLIP-2](https://huggingface.co/Salesforce/blip2-opt-2.7b) - OCR-based document understanding with [GOT-OCR2](https://huggingface.co/stepfun-ai/GOT-OCR-2.0-hf) - Table question answering with [TAPAS](https://huggingface.co/google/tapas-base) - Unified multimodal understanding and generation with [Emu3](https://huggingface.co/BAAI/Emu3-Gen) - Vision to text with [Llava-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf) - Visual question answering with [Llava](https://huggingface.co/llava-hf/llava-1.5-7b-hf) - Visual referring expression segmentation with [Kosmos-2](https://huggingface.co/microsoft/kosmos-2-patch14-224) </details> <details> <summary>NLP</summary> - Masked word completion with [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base) - Named entity recognition with [Gemma](https://huggingface.co/google/gemma-2-2b) - Question answering with [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) - Summarization with [BART](https://huggingface.co/facebook/bart-large-cnn) - Translation with [T5](https://huggingface.co/google-t5/t5-base) - Text generation with [Llama](https://huggingface.co/meta-llama/Llama-3.2-1B) - Text classification with [Qwen](https://huggingface.co/Qwen/Qwen2.5-0.5B) </details> ## Citation We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library: ```bibtex @inproceedings{wolf-etal-2020-transformers, title = "Transformers: State-of-the-Art Natural Language Processing", author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush", booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", month = oct, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6", pages = "38--45" } ```
https://github.com/huggingface/transformers
conftest.py
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# tests directory-specific settings - this file is run automatically
# by pytest before any tests are run

import doctest
import sys
import warnings
from os.path import abspath, dirname, join

import _pytest
import pytest

from transformers.testing_utils import HfDoctestModule, HfDocTestParser


NOT_DEVICE_TESTS = {
    "test_tokenization",
    "test_processor",
    "test_processing",
    "test_beam_constraints",
    "test_configuration_utils",
    "test_data_collator",
    "test_trainer_callback",
    "test_trainer_utils",
    "test_feature_extraction",
    "test_image_processing",
    "test_image_processor",
    "test_image_transforms",
    "test_optimization",
    "test_retrieval",
    "test_config",
    "test_from_pretrained_no_checkpoint",
    "test_keep_in_fp32_modules",
    "test_gradient_checkpointing_backward_compatibility",
    "test_gradient_checkpointing_enable_disable",
    "test_torch_save_load",
    "test_initialization",
    "test_forward_signature",
    "test_model_get_set_embeddings",
    "test_model_main_input_name",
    "test_correct_missing_keys",
    "test_tie_model_weights",
    "test_can_use_safetensors",
    "test_load_save_without_tied_weights",
    "test_tied_weights_keys",
    "test_model_weights_reload_no_missing_tied_weights",
    "test_mismatched_shapes_have_properly_initialized_weights",
    "test_matched_shapes_have_loaded_weights_when_some_mismatched_shapes_exist",
    "test_model_is_small",
    "test_tf_from_pt_safetensors",
    "test_flax_from_pt_safetensors",
    "ModelTest::test_pipeline_",  # None of the pipeline tests from PipelineTesterMixin (which XxxModelTest inherits from) run on device
    "ModelTester::test_pipeline_",
    "/repo_utils/",
    "/utils/",
}

# allow having multiple repository checkouts and not needing to remember to rerun
# `pip install -e '.[dev]'` when switching between checkouts and running tests.
git_repo_path = abspath(join(dirname(__file__), "src"))
sys.path.insert(1, git_repo_path)

# silence FutureWarning warnings in tests since often we can't act on them until
# they become normal warnings - i.e., the tests still need to test the current functionality
warnings.simplefilter(action="ignore", category=FutureWarning)


def pytest_configure(config):
    config.addinivalue_line("markers", "is_pipeline_test: mark test to run only when pipelines are tested")
    config.addinivalue_line("markers", "is_staging_test: mark test to run only in the staging environment")
    config.addinivalue_line("markers", "accelerate_tests: mark tests that require accelerate")
    config.addinivalue_line("markers", "not_device_test: mark tests that always run on CPU")


def pytest_collection_modifyitems(items):
    for item in items:
        if any(test_name in item.nodeid for test_name in NOT_DEVICE_TESTS):
            item.add_marker(pytest.mark.not_device_test)


def pytest_addoption(parser):
    from transformers.testing_utils import pytest_addoption_shared

    pytest_addoption_shared(parser)


def pytest_terminal_summary(terminalreporter):
    from transformers.testing_utils import pytest_terminal_summary_main

    make_reports = terminalreporter.config.getoption("--make-reports")
    if make_reports:
        pytest_terminal_summary_main(terminalreporter, id=make_reports)


def pytest_sessionfinish(session, exitstatus):
    # If no tests are collected, pytest exits with code 5, which makes the CI fail.
    if exitstatus == 5:
        session.exitstatus = 0


# Doctest custom flag to ignore output.
IGNORE_RESULT = doctest.register_optionflag("IGNORE_RESULT")

OutputChecker = doctest.OutputChecker


class CustomOutputChecker(OutputChecker):
    def check_output(self, want, got, optionflags):
        if IGNORE_RESULT & optionflags:
            return True
        return OutputChecker.check_output(self, want, got, optionflags)


doctest.OutputChecker = CustomOutputChecker
_pytest.doctest.DoctestModule = HfDoctestModule
doctest.DocTestParser = HfDocTestParser
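# A hedged sketch (hypothetical docstring, not taken from the repository) of how a
# doctest can opt into the IGNORE_RESULT flag registered above: with
# CustomOutputChecker installed, the expected output of a flagged example is not
# compared against the actual output, so non-deterministic values (URLs, object
# reprs, random numbers) do not fail the doctest run.
def _ignore_result_demo():
    """
    >>> import random
    >>> random.random()  # doctest: +IGNORE_RESULT
    0.0
    """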
https://github.com/huggingface/transformers
examples/3D_parallel.py
# Copyright 2024 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """: This script is used to test training a model using Tensor Parallelism and Data Parallelism. Usage: export CUDA_VISIBLE_DEVICES=0,1,2,3 export CUDA_VISIBLE_DEVICES=4,5,6,7 export CUDA_VISIBLE_DEVICES=5,6,7 TP_SIZE=2 DP_SIZE=2 torchrun --nproc_per_node=4 --rdzv_endpoint=localhost:29503 examples/3D_parallel.py CP_SIZE=2 DP_SIZE=2 torchrun --nproc_per_node=4 examples/3D_parallel.py CP_SIZE=2 TP_SIZE=2 torchrun --nproc_per_node=4 examples/3D_parallel.py DP_SIZE=2 CP_SIZE=2 TP_SIZE=2 torchrun --nproc_per_node=8 examples/3D_parallel.py TP_SIZE=1 CP_SIZE=4 torchrun --nproc_per_node=4 examples/3D_parallel.py TP_SIZE=1 DP_SIZE=4 torchrun --nproc_per_node=4 examples/3D_parallel.py TP_SIZE=4 DP_SIZE=1 torchrun --nproc_per_node=4 --rdzv_endpoint=localhost:29503 examples/3D_parallel.py IGNORE_SANITY=1 CP_SIZE=1 TP_SIZE=1 DP_SIZE=1 torchrun --nproc_per_node=1 --rdzv_endpoint=localhost:29504 examples/3D_parallel.py ocalhost:29504 test_train.py """ import logging import os from collections.abc import Iterable from contextlib import nullcontext import torch import torch.distributed as dist import torch.distributed.checkpoint as dcp import torch.optim as optim import wandb from datasets import load_dataset from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict from torch.distributed.checkpoint.stateful import Stateful from torch.distributed.device_mesh import DeviceMesh from torch.distributed.fsdp import FullyShardedDataParallel as FSDP from torch.distributed.fsdp import ShardingStrategy from torch.distributed.tensor import DTensor from torch.distributed.tensor.experimental import context_parallel from torch.nn.attention import SDPBackend, sdpa_kernel from torch.utils.data import DataLoader from torch.utils.data.distributed import DistributedSampler from transformers import AutoModelForCausalLM, AutoTokenizer # torch.use_deterministic_algorithms(True) torch.backends.cudnn.deterministic = True # Set up logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger = logging.getLogger(__name__) # from torch.distributed.tensor.experimental._attention import set_rotate_method # set_rotate_method("alltoall") # CP rotate shards using all-to-all def main(): tp_size = int(os.environ.get("TP_SIZE", 1)) dp_size = int(os.environ.get("DP_SIZE", 1)) cp_size = int(os.environ.get("CP_SIZE", 1)) # Add CP size configuration sdpa_backend = SDPBackend.FLASH_ATTENTION # For CP # sdpa_backend = SDPBackend.MATH # For CP global_batch_size = 8 # Desired global batch size seq_len = 1024 # Sequence length num_train_steps = 10000 # Number of training steps LR = 1e-5 model_name = "HuggingFaceTB/SmolLM2-1.7B" # model_name = "unsloth/Llama-3.2-1B" CHECKPOINT_DIR = f"checkpoint_tp{tp_size}_dp{dp_size}_cp{cp_size}" # Initialize distributed environment if "RANK" in os.environ and "WORLD_SIZE" in os.environ: 
dist.init_process_group("nccl") rank = dist.get_rank() world_size = dist.get_world_size() local_rank = int(os.environ["LOCAL_RANK"]) torch.cuda.set_device(local_rank) assert world_size == tp_size * dp_size * cp_size, ( f"World size ({world_size}) must equal TP size ({tp_size}) * DP size ({dp_size}) * CP size ({cp_size})" ) mesh = torch.arange(world_size).reshape(dp_size, tp_size, cp_size) world_mesh = DeviceMesh(device_type="cuda", mesh=mesh, mesh_dim_names=("dp", "tp", "cp")) tp_mesh = world_mesh["tp"] dp_mesh = world_mesh["dp"] cp_mesh = world_mesh["cp"] world_mesh["dp", "cp"]._flatten(mesh_dim_name="dp_cp") logger.info(f"Created DeviceMesh: {world_mesh}") logger.info( f"Distributed setup - Rank: {rank}, World size: {world_size}, Local rank: {local_rank}, DP: {dp_mesh.get_local_rank()}, TP: {tp_mesh.get_local_rank()}, CP: {cp_mesh.get_local_rank()}" ) if dist.get_rank() == 0: wandb.init( project="tp_dp_test", config={ "tp_size": tp_size, "dp_size": dp_size, "cp_size": cp_size, "global_batch_size": global_batch_size, "model_name": model_name, "dataset": "roneneldan/TinyStories-1M", "seq_len": seq_len, "lr": LR, "weight_decay": 0.1, }, name=f"llama_tp{tp_size}_dp{dp_size}_cp{cp_size}" if model_name == "unsloth/Llama-3.2-1B" else f"tp{tp_size}_dp{dp_size}_cp{cp_size}", ) logger.info("Wandb initialized.") # Log the current file to wandb wandb.save("test_train.py") # Load model and tokenizer logger.info(f"Loading model and tokenizer from {model_name}") tokenizer = AutoTokenizer.from_pretrained(model_name) if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token logger.info(f"Set pad_token to eos_token: {tokenizer.pad_token}") model = AutoModelForCausalLM.from_pretrained( model_name, device_mesh=tp_mesh if dist.is_initialized() else None, tp_plan="auto", torch_dtype=torch.bfloat16, ) logger.info(f"Model loaded onto device mesh: {tp_mesh}") device = torch.device(f"cuda:{local_rank}") logger.info(f"Using device: {device} for non-model tensors") use_ddp = False if dist.is_initialized() and dp_mesh.size() > 1: model = FSDP(model, device_mesh=dp_mesh, sharding_strategy=ShardingStrategy.NO_SHARD) use_ddp = True pass model.train() logger.info("Loading TinyStories dataset...") raw_dataset = load_dataset("roneneldan/TinyStories", split="train[:1%]") # Use 1% for faster testing def tokenize_function(examples): # Tokenize the text without padding tokenized_batch = tokenizer( examples["text"], padding=False, truncation=True, max_length=seq_len, return_tensors=None ) # Set labels to be the same as input_ids for Causal LM tokenized_batch["labels"] = tokenized_batch["input_ids"].copy() return tokenized_batch tokenized_dataset = raw_dataset.map(tokenize_function, batched=True, remove_columns=["text"]) logger.info(f"Dataset loaded and tokenized. 
Size: {len(tokenized_dataset)}") # Create packed sequences def create_packed_sequences(examples): # Flatten all sequences all_tokens = [] for input_ids in examples["input_ids"]: all_tokens.extend(input_ids) # Split into sequences of seq_len + 1 (for input + label) num_sequences = len(all_tokens) // (seq_len + 1) packed_input_ids = [] packed_labels = [] for i in range(num_sequences): start_idx = i * (seq_len + 1) end_idx = start_idx + (seq_len + 1) # Get the full sequence full_sequence = all_tokens[start_idx:end_idx] # For input_ids, remove the last token packed_input_ids.append(full_sequence[:-1]) # For labels, remove the first token packed_labels.append(full_sequence[1:]) return {"input_ids": packed_input_ids, "labels": packed_labels} # Apply packing to the dataset packed_dataset = tokenized_dataset.map( create_packed_sequences, batched=True, remove_columns=tokenized_dataset.column_names, batch_size=1000, # Process in batches for efficiency num_proc=60, ) logger.info(f"Dataset packed. New size: {len(packed_dataset)}") # Shuffle the packed dataset packed_dataset = packed_dataset.shuffle(seed=42) logger.info("Packed dataset shuffled") # Calculate local batch size if dist.is_initialized(): assert global_batch_size % dp_mesh.size() == 0, ( f"Global batch size ({global_batch_size}) must be divisible by DP size ({dp_mesh.size()})" ) local_batch_size = global_batch_size // dp_mesh.size() else: local_batch_size = global_batch_size logger.info( f"Global batch size: {global_batch_size}, DP size: {dp_size if dist.is_initialized() else 1}, Local batch size: {local_batch_size}" ) # Simple collate function since sequences are already packed def collate_fn(batch): input_ids = torch.tensor([item["input_ids"] for item in batch], dtype=torch.long) labels = torch.tensor([item["labels"] for item in batch], dtype=torch.long) return {"input_ids": input_ids, "labels": labels} if dist.is_initialized(): sampler = DistributedSampler( packed_dataset, num_replicas=dp_mesh.size(), rank=dp_mesh.get_local_rank(), shuffle=False ) else: sampler = None dataloader = DataLoader( packed_dataset, batch_size=local_batch_size, sampler=sampler, shuffle=False, collate_fn=collate_fn, pin_memory=True, ) logger.info(f"DataLoader created. 
Distributed: {dist.is_initialized()}") optimizer = optim.AdamW(model.parameters(), lr=LR, weight_decay=0.1) # Training loop logger.info(f"Starting training for {num_train_steps} steps...") model.train() step = 0 while step < num_train_steps: for batch in dataloader: if step >= num_train_steps: break # Exit loop if max steps reached # Move batch to appropriate device batch = {k: v.to(device) for k, v in batch.items()} optimizer.zero_grad() # Add position_ids to batch before CP sharding batch_size = batch["input_ids"].shape[0] position_ids = torch.arange(0, seq_len, dtype=torch.long, device=device) position_ids = position_ids.unsqueeze(0).expand(batch_size, -1) batch["position_ids"] = position_ids from torch.distributed.tensor.experimental._attention import _cp_options _cp_options.enable_load_balance = False with sdpa_kernel(sdpa_backend): # TODO: ideally move this to attention implementation cp_context = ( nullcontext() if cp_mesh.size() == 1 else context_parallel( cp_mesh, buffers=[ batch["input_ids"], batch["labels"], batch["position_ids"], ], buffer_seq_dims=[1, 1, 1], ) ) with cp_context: # Pop labels from batch before model forward pass labels = batch.pop("labels") outputs = model(**batch) # [mbs, seq_len/cp] loss = outputs.loss logits = outputs.logits # Compute loss with shifted labels loss = model.loss_function( logits=logits, labels=None, shift_labels=labels, vocab_size=model.config.vocab_size ) loss.backward() # all reduce grads across dp_cp if applicable all_reduce_grads(model, world_mesh, use_ddp=use_ddp) if hasattr(model, "clip_grad_norm_"): gradnorm = model.clip_grad_norm_(max_norm=1.0, norm_type=2.0) # TODO: fix reported gradnorm else: # only works with FSDP's NO_SHARD otherwise we should use FSDP's clip_grad_norm_ assert len(list(model.parameters())) > 5, "No parameters found in model. Probably DDP bug.." 
gradnorm = clip_grad_norm_(model.parameters(), max_norm=1.0, norm_type=2.0, foreach=True) optimizer.step() # allreduce loss across cp_dp before logging if dist.is_initialized() and (cp_mesh.size() > 1 or dp_mesh.size() > 1): dist.all_reduce(loss, group=world_mesh["dp_cp"].get_group(), op=dist.ReduceOp.AVG) current_loss = loss.item() # Log loss and gradnorm to wandb (only on rank 0 of dp group) if not dist.is_initialized() or dist.get_rank() == 0: logger.info( f"Step: {step} | GBS: {global_batch_size} | DP: {dp_mesh.size()} | TP: {tp_mesh.size()} | CP: {cp_mesh.size()} | Loss: {current_loss} | Gradnorm: {gradnorm} | lr: {LR}" ) wandb.log( { "train/loss": current_loss, "train/gradnorm": gradnorm, "step": step, "lr": LR, "GBS": global_batch_size, } ) step += 1 # Increment step count logger.info("Training loop finished.") # Save model using DCP (only if distributed) if dist.is_initialized(): state_dict = {"app": AppState(model, optimizer)} dcp.save( state_dict=state_dict, checkpoint_id=CHECKPOINT_DIR, ) logger.info(f"Saved checkpoint to {CHECKPOINT_DIR}") else: # Fallback to regular save for non-distributed case save_dir = "test_model_nondist" model.save_pretrained(save_dir, safe_serialization=False) tokenizer.save_pretrained(save_dir) # Save tokenizer too logger.info(f"Saved model to {save_dir}") dist.destroy_process_group() logger.info("Cleaned up distributed process group") # Finish wandb run on rank 0 if dist.get_rank() == 0: wandb.finish() logger.info("Wandb run finished.") def all_reduce_grads(model, world_mesh, use_ddp): """All reduce gradients across dp_cp if applicable.""" cp_mesh = world_mesh["cp"] if use_ddp: # DDP/FSDP takes care of syncing grads mesh = cp_mesh else: mesh = world_mesh["dp", "cp"]._flatten(mesh_dim_name="dp_cp") if dist.is_initialized() and mesh.size() > 1: for name, param in model.named_parameters(): if param.grad is not None: # Workaround for cross-mesh communication limitation with DTensor gradients if isinstance(param.grad, DTensor): local_grad = param.grad.to_local() # Ensure grad requires grad for inplace modification checks (might not be needed) # local_grad = local_grad.detach().requires_grad_(True) torch.distributed.all_reduce(local_grad, op=torch.distributed.ReduceOp.SUM, group=mesh.get_group()) local_grad = local_grad / mesh.size() # Assign averaged grad back - need careful handling if DTensor structure is complex # This simple assignment might work if the grad structure matches param structure param.grad = DTensor.from_local( local_grad, device_mesh=param.grad.device_mesh, placements=param.grad.placements ) else: # Handle regular tensors if any exist (e.g. buffers not converted to DTensor) torch.distributed.all_reduce(param.grad, op=torch.distributed.ReduceOp.AVG, group=mesh.get_group()) class AppState(Stateful): """Wrapper for checkpointing the Application State including model and optimizer.""" def __init__(self, model, optimizer=None): self.model = model self.optimizer = optimizer def state_dict(self): model_state_dict, optimizer_state_dict = get_state_dict(self.model, self.optimizer) return {"model": model_state_dict, "optim": optimizer_state_dict} def load_state_dict(self, state_dict): set_state_dict( self.model, self.optimizer, model_state_dict=state_dict["model"], optim_state_dict=state_dict["optim"] ) def clip_grad_norm_( parameters: Iterable[torch.Tensor], max_norm: float, norm_type: float = 2.0, error_if_nonfinite: bool = False, foreach: bool | None = None, ) -> torch.Tensor: """ Clip the gradient norm of an iterable of parameters. 
""" # Filter out parameters with no gradients parameters = [p for p in parameters if p.grad is not None] assert len(parameters) > 0, "No parameters with gradients found" # Calculate total norm if norm_type == float("inf"): total_norm = max(p.grad.detach().abs().max() for p in parameters) else: total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type) for p in parameters]), norm_type) # Convert DTensor to local tensor if needed if isinstance(total_norm, DTensor): total_norm = total_norm.full_tensor() # Clip gradients clip_coef = max_norm / (total_norm + 1e-6) if clip_coef < 1: for p in parameters: p.grad.detach().mul_(clip_coef) return total_norm if __name__ == "__main__": main()
https://github.com/huggingface/transformers
examples/run_on_remote.py
#!/usr/bin/env python
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import shlex

import runhouse as rh


if __name__ == "__main__":
    # Refer to https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup for cloud access
    # setup instructions, if using on-demand hardware
    # If user passes --user <user> --host <host> --key_path <key_path> <example> <args>, fill them in as BYO cluster
    # If user passes --instance <instance> --provider <provider> <example> <args>, fill them in as on-demand cluster
    # Throw an error if user passes both BYO and on-demand cluster args
    # Otherwise, use default values
    parser = argparse.ArgumentParser()
    parser.add_argument("--user", type=str, default="ubuntu")
    parser.add_argument("--host", type=str, default="localhost")
    parser.add_argument("--key_path", type=str, default=None)
    parser.add_argument("--instance", type=str, default="V100:1")
    parser.add_argument("--provider", type=str, default="cheapest")
    parser.add_argument("--use_spot", type=bool, default=False)
    parser.add_argument("--example", type=str, default="pytorch/text-generation/run_generation.py")
    args, unknown = parser.parse_known_args()

    if args.host != "localhost":
        if args.instance != "V100:1" or args.provider != "cheapest":
            raise ValueError("Cannot specify both BYO and on-demand cluster args")
        cluster = rh.cluster(
            name="rh-cluster", ips=[args.host], ssh_creds={"ssh_user": args.user, "ssh_private_key": args.key_path}
        )
    else:
        cluster = rh.cluster(
            name="rh-cluster", instance_type=args.instance, provider=args.provider, use_spot=args.use_spot
        )
    example_dir = args.example.rsplit("/", 1)[0]

    # Set up remote environment
    cluster.install_packages(["pip:./"])  # Installs transformers from local source
    # Note transformers is copied into the home directory on the remote machine, so we can install from there
    cluster.run([f"pip install -r transformers/examples/{example_dir}/requirements.txt"])
    cluster.run(["pip install torch --upgrade --extra-index-url https://download.pytorch.org/whl/cu117"])

    # Run example. You can bypass the CLI wrapper and paste your own code here.
    cluster.run([f"python transformers/examples/{args.example} {shlex.join(unknown)}"])

    # Alternatively, we can just import and run a training function (especially if there's no wrapper CLI):
    # from my_script... import train
    # reqs = ['pip:./', 'torch', 'datasets', 'accelerate', 'evaluate', 'tqdm', 'scipy', 'scikit-learn', 'tensorboard']
    # launch_train_gpu = rh.function(fn=train,
    #                                system=gpu,
    #                                reqs=reqs,
    #                                name='train_bert_glue')
    #
    # We can pass in arguments just like we would to a function:
    # launch_train_gpu(num_epochs = 3, lr = 2e-5, seed = 42, batch_size = 16
    #                  stream_logs=True)
https://github.com/huggingface/transformers
setup.py
"# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License(...TRUNCATED)