| repo | file_path | text |
|---|---|---|
https://github.com/scikit-learn/scikit-learn
|
README.md
|
.. -*- mode: rst -*-
|Azure| |Codecov| |CircleCI| |Nightly wheels| |Ruff| |PythonVersion| |PyPi| |DOI| |Benchmark|
.. |Azure| image:: https://dev.azure.com/scikit-learn/scikit-learn/_apis/build/status/scikit-learn.scikit-learn?branchName=main
:target: https://dev.azure.com/scikit-learn/scikit-learn/_build/latest?definitionId=1&branchName=main
.. |CircleCI| image:: https://circleci.com/gh/scikit-learn/scikit-learn/tree/main.svg?style=shield
:target: https://circleci.com/gh/scikit-learn/scikit-learn
.. |Codecov| image:: https://codecov.io/gh/scikit-learn/scikit-learn/branch/main/graph/badge.svg?token=Pk8G9gg3y9
:target: https://codecov.io/gh/scikit-learn/scikit-learn
.. |Nightly wheels| image:: https://github.com/scikit-learn/scikit-learn/actions/workflows/wheels.yml/badge.svg?event=schedule
:target: https://github.com/scikit-learn/scikit-learn/actions?query=workflow%3A%22Wheel+builder%22+event%3Aschedule
.. |Ruff| image:: https://img.shields.io/badge/code%20style-ruff-000000.svg
:target: https://github.com/astral-sh/ruff
.. |PythonVersion| image:: https://img.shields.io/pypi/pyversions/scikit-learn.svg
:target: https://pypi.org/project/scikit-learn/
.. |PyPi| image:: https://img.shields.io/pypi/v/scikit-learn
:target: https://pypi.org/project/scikit-learn
.. |DOI| image:: https://zenodo.org/badge/21369/scikit-learn/scikit-learn.svg
:target: https://zenodo.org/badge/latestdoi/21369/scikit-learn/scikit-learn
.. |Benchmark| image:: https://img.shields.io/badge/Benchmarked%20by-asv-blue
:target: https://scikit-learn.org/scikit-learn-benchmarks
.. |PythonMinVersion| replace:: 3.10
.. |NumPyMinVersion| replace:: 1.22.0
.. |SciPyMinVersion| replace:: 1.8.0
.. |JoblibMinVersion| replace:: 1.2.0
.. |ThreadpoolctlMinVersion| replace:: 3.1.0
.. |MatplotlibMinVersion| replace:: 3.5.0
.. |Scikit-ImageMinVersion| replace:: 0.19.0
.. |PandasMinVersion| replace:: 1.4.0
.. |SeabornMinVersion| replace:: 0.9.0
.. |PytestMinVersion| replace:: 7.1.2
.. |PlotlyMinVersion| replace:: 5.14.0
.. image:: https://raw.githubusercontent.com/scikit-learn/scikit-learn/main/doc/logos/scikit-learn-logo.png
:target: https://scikit-learn.org/
**scikit-learn** is a Python module for machine learning built on top of
SciPy and distributed under the 3-Clause BSD license.
The project was started in 2007 by David Cournapeau as a Google Summer
of Code project, and since then many volunteers have contributed. See
the `About us <https://scikit-learn.org/dev/about.html#authors>`__ page
for a list of core contributors.
It is currently maintained by a team of volunteers.
Website: https://scikit-learn.org
Installation
------------
Dependencies
~~~~~~~~~~~~
scikit-learn requires:
- Python (>= |PythonMinVersion|)
- NumPy (>= |NumPyMinVersion|)
- SciPy (>= |SciPyMinVersion|)
- joblib (>= |JoblibMinVersion|)
- threadpoolctl (>= |ThreadpoolctlMinVersion|)
Scikit-learn plotting capabilities (i.e., functions starting with ``plot_`` and
classes ending with ``Display``) require Matplotlib (>= |MatplotlibMinVersion|).
Running the examples also requires Matplotlib >= |MatplotlibMinVersion|.
A few examples require scikit-image >= |Scikit-ImageMinVersion|, a few
require pandas >= |PandasMinVersion|, and some require seaborn >=
|SeabornMinVersion| and plotly >= |PlotlyMinVersion|.
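For instance, a minimal sketch of the ``Display`` API (this assumes
scikit-learn and Matplotlib are installed; the iris data and logistic
regression model are illustrative choices, not part of the requirements)::

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import ConfusionMatrixDisplay
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    # Draws the confusion matrix for the fitted estimator via Matplotlib.
    ConfusionMatrixDisplay.from_estimator(clf, X_test, y_test)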
User installation
~~~~~~~~~~~~~~~~~
If you already have a working installation of NumPy and SciPy,
the easiest way to install scikit-learn is using ``pip``::
pip install -U scikit-learn
or ``conda``::
conda install -c conda-forge scikit-learn
The documentation includes more detailed `installation instructions <https://scikit-learn.org/stable/install.html>`_.
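After installing, a quick sanity check is to print version information (a
minimal sketch)::

    import sklearn

    # Prints scikit-learn's version plus the versions of its core dependencies.
    sklearn.show_versions()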
Changelog
---------
See the `changelog <https://scikit-learn.org/dev/whats_new.html>`__
for a history of notable changes to scikit-learn.
Development
-----------
We welcome new contributors of all experience levels. The scikit-learn
community goals are to be helpful, welcoming, and effective. The
`Development Guide <https://scikit-learn.org/stable/developers/index.html>`_
has detailed information about contributing code, documentation, tests, and
more. We've included some basic information in this README.
Important links
~~~~~~~~~~~~~~~
- Official source code repo: https://github.com/scikit-learn/scikit-learn
- Download releases: https://pypi.org/project/scikit-learn/
- Issue tracker: https://github.com/scikit-learn/scikit-learn/issues
Source code
~~~~~~~~~~~
You can check out the latest sources with the command::
git clone https://github.com/scikit-learn/scikit-learn.git
Contributing
~~~~~~~~~~~~
To learn more about making a contribution to scikit-learn, please see our
`Contributing guide
<https://scikit-learn.org/dev/developers/contributing.html>`_.
Testing
~~~~~~~
After installation, you can launch the test suite from outside the source
directory (you will need to have ``pytest`` >= |PytestMinVersion| installed)::
pytest sklearn
See the web page https://scikit-learn.org/dev/developers/contributing.html#testing-and-improving-test-coverage
for more information.
Random number generation can be controlled during testing by setting
the ``SKLEARN_SEED`` environment variable.
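For example, to pin the seed for a reproducible test run (the value ``42`` is
an arbitrary choice)::

    SKLEARN_SEED=42 pytest sklearn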
Submitting a Pull Request
~~~~~~~~~~~~~~~~~~~~~~~~~
Before opening a Pull Request, have a look at the
full Contributing page to make sure your code complies
with our guidelines: https://scikit-learn.org/stable/developers/index.html
Project History
---------------
The project was started in 2007 by David Cournapeau as a Google Summer
of Code project, and since then many volunteers have contributed. See
the `About us <https://scikit-learn.org/dev/about.html#authors>`__ page
for a list of core contributors.
The project is currently maintained by a team of volunteers.
**Note**: `scikit-learn` was previously referred to as `scikits.learn`.
Help and Support
----------------
Documentation
~~~~~~~~~~~~~
- HTML documentation (stable release): https://scikit-learn.org
- HTML documentation (development version): https://scikit-learn.org/dev/
- FAQ: https://scikit-learn.org/stable/faq.html
Communication
~~~~~~~~~~~~~
Main Channels
^^^^^^^^^^^^^
- **Website**: https://scikit-learn.org
- **Blog**: https://blog.scikit-learn.org
- **Mailing list**: https://mail.python.org/mailman/listinfo/scikit-learn
Developer & Support
^^^^^^^^^^^^^^^^^^^^^^
- **GitHub Discussions**: https://github.com/scikit-learn/scikit-learn/discussions
- **Stack Overflow**: https://stackoverflow.com/questions/tagged/scikit-learn
- **Discord**: https://discord.gg/h9qyrK8Jc8
Social Media Platforms
^^^^^^^^^^^^^^^^^^^^^^
- **LinkedIn**: https://www.linkedin.com/company/scikit-learn
- **YouTube**: https://www.youtube.com/channel/UCJosFjYm0ZYVUARxuOZqnnw/playlists
- **Facebook**: https://www.facebook.com/scikitlearnofficial/
- **Instagram**: https://www.instagram.com/scikitlearnofficial/
- **TikTok**: https://www.tiktok.com/@scikit.learn
- **Bluesky**: https://bsky.app/profile/scikit-learn.org
- **Mastodon**: https://mastodon.social/@[email protected]
Resources
^^^^^^^^^
- **Calendar**: https://blog.scikit-learn.org/calendar/
- **Logos & Branding**: https://github.com/scikit-learn/scikit-learn/tree/main/doc/logos
Citation
~~~~~~~~
If you use scikit-learn in a scientific publication, we would appreciate citations: https://scikit-learn.org/stable/about.html#citing-scikit-learn
|
https://github.com/tensorflow/tensorflow
|
README.md
|
<div align="center">
<img src="https://www.tensorflow.org/images/tf_logo_horizontal.png">
</div>
[](https://badge.fury.io/py/tensorflow)
[](https://doi.org/10.5281/zenodo.4724125)
[](https://bestpractices.coreinfrastructure.org/projects/1486)
[](https://securityscorecards.dev/viewer/?uri=github.com/tensorflow/tensorflow)
[](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:tensorflow)
[](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:tensorflow-py)
[](https://ossrank.com/p/44)
[](CODE_OF_CONDUCT.md)
**`Documentation`** |
------------------- |
[](https://www.tensorflow.org/api_docs/) |
[TensorFlow](https://www.tensorflow.org/) is an end-to-end open source platform
for machine learning. It has a comprehensive, flexible ecosystem of
[tools](https://www.tensorflow.org/resources/tools),
[libraries](https://www.tensorflow.org/resources/libraries-extensions), and
[community](https://www.tensorflow.org/community) resources that lets
researchers push the state-of-the-art in ML and developers easily build and
deploy ML-powered applications.
TensorFlow was originally developed by researchers and engineers working within
the Machine Intelligence team at Google Brain to conduct research in machine
learning and neural networks. However, the framework is versatile enough to be
used in other areas as well.
TensorFlow provides stable [Python](https://www.tensorflow.org/api_docs/python)
and [C++](https://www.tensorflow.org/api_docs/cc) APIs, as well as a
non-guaranteed backward compatible API for
[other languages](https://www.tensorflow.org/api_docs).
Keep up-to-date with release announcements and security updates by subscribing
to
[[email protected]](https://groups.google.com/a/tensorflow.org/forum/#!forum/announce).
See all the [mailing lists](https://www.tensorflow.org/community/forums).
## Install
See the [TensorFlow install guide](https://www.tensorflow.org/install) for the
[pip package](https://www.tensorflow.org/install/pip), to
[enable GPU support](https://www.tensorflow.org/install/gpu), use a
[Docker container](https://www.tensorflow.org/install/docker), and
[build from source](https://www.tensorflow.org/install/source).
To install the current release, which includes support for
[CUDA-enabled GPU cards](https://www.tensorflow.org/install/gpu) *(Ubuntu and
Windows)*:
```
$ pip install tensorflow
```
Other devices (DirectX and macOS Metal) are supported using
[Device plugins](https://www.tensorflow.org/install/gpu_plugins#available_devices).
A smaller CPU-only package is also available:
```
$ pip install tensorflow-cpu
```
To update TensorFlow to the latest version, add the `--upgrade` flag to the
above commands.
*Nightly binaries are available for testing using the
[tf-nightly](https://pypi.python.org/pypi/tf-nightly) and
[tf-nightly-cpu](https://pypi.python.org/pypi/tf-nightly-cpu) packages on PyPI.*
#### *Try your first TensorFlow program*
```shell
$ python
```
```python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'
```
For more examples, see the
[TensorFlow tutorials](https://www.tensorflow.org/tutorials/).
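As a slightly larger sketch, here is a minimal Keras model trained on random
data. The shapes, layer sizes, and hyperparameters are arbitrary choices for
illustration, not recommendations:

```python
import numpy as np
import tensorflow as tf

# Tiny regression problem on random data (shapes are arbitrary).
x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=2, verbose=0)
print(model.predict(x[:3]))
```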
## Contribution guidelines
**If you want to contribute to TensorFlow, be sure to review the
[contribution guidelines](CONTRIBUTING.md). This project adheres to TensorFlow's
[code of conduct](CODE_OF_CONDUCT.md). By participating, you are expected to
uphold this code.**
**We use [GitHub issues](https://github.com/tensorflow/tensorflow/issues) for
tracking requests and bugs; please see the
[TensorFlow Forum](https://discuss.tensorflow.org/) for general questions and
discussion, and direct specific questions to
[Stack Overflow](https://stackoverflow.com/questions/tagged/tensorflow).**
The TensorFlow project strives to abide by generally accepted best practices in
open-source software development.
## Patching guidelines
Follow these steps to patch a specific version of TensorFlow, for example, to
apply fixes to bugs or security vulnerabilities:
* Clone the TensorFlow repo and switch to the corresponding branch for your
desired TensorFlow version, for example, branch `r2.8` for version 2.8.
* Apply (that is, cherry-pick) the desired changes and resolve any code
conflicts.
* Run TensorFlow tests and ensure they pass.
* [Build](https://www.tensorflow.org/install/source) the TensorFlow pip
package from source.
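For example, the steps above might look like the following (a sketch only; the
branch name, commit hash, and test target are placeholders):

```shell
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout r2.8                 # branch for the version being patched
git cherry-pick <commit-hash>     # apply the fix; resolve any conflicts
bazel test //tensorflow/core/...  # run the tests relevant to the change
```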
## Continuous build status
You can find more community-supported platforms and configurations in the
[TensorFlow SIG Build community builds table](https://github.com/tensorflow/build#community-supported-tensorflow-builds).
### Official Builds
Build Type | Status | Artifacts
----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------
**Linux CPU** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-cc.html) | [PyPI](https://pypi.org/project/tf-nightly/)
**Linux GPU** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-gpu-py3.html) | [PyPI](https://pypi.org/project/tf-nightly-gpu/)
**Linux XLA** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/ubuntu-xla.html) | TBA
**macOS** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/macos-py2-cc.html) | [PyPI](https://pypi.org/project/tf-nightly/)
**Windows CPU** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/windows-cpu.html) | [PyPI](https://pypi.org/project/tf-nightly/)
**Windows GPU** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/windows-gpu.html) | [PyPI](https://pypi.org/project/tf-nightly-gpu/)
**Android** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/android.html) | [Download](https://bintray.com/google/tensorflow/tensorflow/_latestVersion)
**Raspberry Pi 0 and 1** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/rpi01-py3.html) | [Py3](https://storage.googleapis.com/tensorflow-nightly/tensorflow-1.10.0-cp34-none-linux_armv6l.whl)
**Raspberry Pi 2 and 3** | [](https://storage.googleapis.com/tensorflow-kokoro-build-badges/rpi23-py3.html) | [Py3](https://storage.googleapis.com/tensorflow-nightly/tensorflow-1.10.0-cp34-none-linux_armv7l.whl)
**Libtensorflow MacOS CPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/macos/latest/macos_cpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/)
**Libtensorflow Linux CPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/ubuntu_16/latest/cpu/ubuntu_cpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/)
**Libtensorflow Linux GPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/ubuntu_16/latest/gpu/ubuntu_gpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/)
**Libtensorflow Windows CPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/windows/latest/cpu/windows_cpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/)
**Libtensorflow Windows GPU** | Status Temporarily Unavailable | [Nightly Binary](https://storage.googleapis.com/libtensorflow-nightly/prod/tensorflow/release/windows/latest/gpu/windows_gpu_libtensorflow_binaries.tar.gz) [Official GCS](https://storage.googleapis.com/tensorflow/)
## Resources
* [TensorFlow.org](https://www.tensorflow.org)
* [TensorFlow Tutorials](https://www.tensorflow.org/tutorials/)
* [TensorFlow Official Models](https://github.com/tensorflow/models/tree/master/official)
* [TensorFlow Examples](https://github.com/tensorflow/examples)
* [TensorFlow Codelabs](https://codelabs.developers.google.com/?cat=TensorFlow)
* [TensorFlow Blog](https://blog.tensorflow.org)
* [Learn ML with TensorFlow](https://www.tensorflow.org/resources/learn-ml)
* [TensorFlow Twitter](https://twitter.com/tensorflow)
* [TensorFlow YouTube](https://www.youtube.com/channel/UC0rqucBdTuFTjJiefW5t-IQ)
* [TensorFlow model optimization roadmap](https://www.tensorflow.org/model_optimization/guide/roadmap)
* [TensorFlow White Papers](https://www.tensorflow.org/about/bib)
* [TensorBoard Visualization Toolkit](https://github.com/tensorflow/tensorboard)
* [TensorFlow Code Search](https://cs.opensource.google/tensorflow/tensorflow)
Learn more about the
[TensorFlow community](https://www.tensorflow.org/community) and how to
[contribute](https://www.tensorflow.org/community/contribute).
## Courses
* [Coursera](https://www.coursera.org/search?query=TensorFlow)
* [Udacity](https://www.udacity.com/courses/all?search=TensorFlow)
* [Edx](https://www.edx.org/search?q=TensorFlow)
## License
[Apache License 2.0](LICENSE)
|
https://github.com/tensorflow/tensorflow
|
configure.py
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""configure script to get build parameters from user."""
import argparse
import errno
import json
import os
import platform
import re
import shutil
import subprocess
import sys
_DEFAULT_CUDA_COMPUTE_CAPABILITIES = '3.5,7.0'
_SUPPORTED_ANDROID_NDK_VERSIONS = [19, 20, 21, 25]
_DEFAULT_PROMPT_ASK_ATTEMPTS = 10
_TF_BAZELRC_FILENAME = '.tf_configure.bazelrc'
_TF_WORKSPACE_ROOT = ''
_TF_BAZELRC = ''
_TF_CURRENT_BAZEL_VERSION = None
NCCL_LIB_PATHS = [
'lib64/', 'lib/powerpc64le-linux-gnu/', 'lib/x86_64-linux-gnu/', ''
]
# List of files to configure when building Bazel on Apple platforms.
APPLE_BAZEL_FILES = [
'tensorflow/lite/ios/BUILD', 'tensorflow/lite/objc/BUILD',
'tensorflow/lite/swift/BUILD',
'tensorflow/lite/tools/benchmark/experimental/ios/BUILD'
]
# List of files to move when building for iOS.
IOS_FILES = [
'tensorflow/lite/objc/TensorFlowLiteObjC.podspec',
'tensorflow/lite/swift/TensorFlowLiteSwift.podspec',
]
class UserInputError(Exception):
pass
def is_windows():
return platform.system() == 'Windows'
def is_linux():
return platform.system() == 'Linux'
def is_macos():
return platform.system() == 'Darwin'
def is_ppc64le():
return platform.machine() == 'ppc64le'
def is_s390x():
return platform.machine() == 's390x'
def is_cygwin():
return platform.system().startswith('CYGWIN_NT')
def get_input(question):
try:
try:
answer = raw_input(question)
except NameError:
answer = input(question) # pylint: disable=bad-builtin
except EOFError:
answer = ''
return answer
def symlink_force(target, link_name):
"""Force symlink, equivalent of 'ln -sf'.
Args:
target: items to link to.
link_name: name of the link.
"""
try:
os.symlink(target, link_name)
except OSError as e:
if e.errno == errno.EEXIST:
os.remove(link_name)
os.symlink(target, link_name)
else:
raise e
def write_to_bazelrc(line):
with open(_TF_BAZELRC, 'a') as f:
f.write(line + '\n')
def write_action_env_to_bazelrc(var_name, var):
write_to_bazelrc('build --action_env {}="{}"'.format(var_name, str(var)))
def write_repo_env_to_bazelrc(config_name, var_name, var):
write_to_bazelrc(
'build:{} --repo_env {}="{}"'.format(config_name, var_name, str(var))
)
def run_shell(cmd, allow_non_zero=False, stderr=None):
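  """Run cmd and return its decoded, stripped stdout.

  If allow_non_zero is True, the output of a failing command is returned
  instead of raising subprocess.CalledProcessError.
  """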
if stderr is None:
stderr = sys.stdout
if allow_non_zero:
try:
output = subprocess.check_output(cmd, stderr=stderr)
except subprocess.CalledProcessError as e:
output = e.output
else:
output = subprocess.check_output(cmd, stderr=stderr)
return output.decode('UTF-8').strip()
def cygpath(path):
"""Convert path from posix to windows."""
return os.path.abspath(path).replace('\\', '/')
def get_python_path(environ_cp, python_bin_path):
"""Get the python site package paths."""
python_paths = []
if environ_cp.get('PYTHONPATH'):
python_paths = environ_cp.get('PYTHONPATH').split(':')
try:
stderr = open(os.devnull, 'wb')
library_paths = run_shell([
python_bin_path, '-c',
'import site; print("\\n".join(site.getsitepackages()))'
],
stderr=stderr).split('\n')
except subprocess.CalledProcessError:
library_paths = [
run_shell([
python_bin_path,
'-c',
'import sysconfig; print(sysconfig.get_path("purelib"))',
])
]
all_paths = set(python_paths + library_paths)
# Sort set so order is deterministic
all_paths = sorted(all_paths)
paths = []
for path in all_paths:
if os.path.isdir(path):
paths.append(path)
return paths
def get_python_major_version(python_bin_path):
"""Get the python major version."""
return run_shell([python_bin_path, '-c', 'import sys; print(sys.version[0])'])
def setup_python(environ_cp):
"""Setup python related env variables."""
# Get PYTHON_BIN_PATH, default is the current running python.
default_python_bin_path = sys.executable
ask_python_bin_path = ('Please specify the location of python. [Default is '
'{}]: ').format(default_python_bin_path)
while True:
python_bin_path = get_from_env_or_user_or_default(environ_cp,
'PYTHON_BIN_PATH',
ask_python_bin_path,
default_python_bin_path)
# Check if the path is valid
if os.path.isfile(python_bin_path) and os.access(python_bin_path, os.X_OK):
break
elif not os.path.exists(python_bin_path):
print('Invalid python path: {} cannot be found.'.format(python_bin_path))
else:
print('{} is not executable. Is it the python binary?'.format(
python_bin_path))
environ_cp['PYTHON_BIN_PATH'] = ''
# Convert python path to Windows style before checking lib and version
if is_windows() or is_cygwin():
python_bin_path = cygpath(python_bin_path)
# Get PYTHON_LIB_PATH
python_lib_path = environ_cp.get('PYTHON_LIB_PATH')
if not python_lib_path:
python_lib_paths = get_python_path(environ_cp, python_bin_path)
if environ_cp.get('USE_DEFAULT_PYTHON_LIB_PATH') == '1':
python_lib_path = python_lib_paths[0]
else:
print('Found possible Python library paths:\n %s' %
'\n '.join(python_lib_paths))
default_python_lib_path = python_lib_paths[0]
python_lib_path = get_input(
'Please input the desired Python library path to use. '
'Default is [{}]\n'.format(python_lib_paths[0]))
if not python_lib_path:
python_lib_path = default_python_lib_path
environ_cp['PYTHON_LIB_PATH'] = python_lib_path
python_major_version = get_python_major_version(python_bin_path)
if python_major_version == '2':
write_to_bazelrc('build --host_force_python=PY2')
# Convert python path to Windows style before writing into bazel.rc
if is_windows() or is_cygwin():
python_lib_path = cygpath(python_lib_path)
# Set-up env variables used by python_configure.bzl
write_action_env_to_bazelrc('PYTHON_BIN_PATH', python_bin_path)
write_action_env_to_bazelrc('PYTHON_LIB_PATH', python_lib_path)
write_to_bazelrc('build --python_path="{}"'.format(python_bin_path))
environ_cp['PYTHON_BIN_PATH'] = python_bin_path
# If chosen python_lib_path is from a path specified in the PYTHONPATH
# variable, need to tell bazel to include PYTHONPATH
if environ_cp.get('PYTHONPATH'):
python_paths = environ_cp.get('PYTHONPATH').split(':')
if python_lib_path in python_paths:
write_action_env_to_bazelrc('PYTHONPATH', environ_cp.get('PYTHONPATH'))
# Write tools/python_bin_path.sh
with open(
os.path.join(_TF_WORKSPACE_ROOT, 'tools', 'python_bin_path.sh'),
'w') as f:
f.write('export PYTHON_BIN_PATH="{}"'.format(python_bin_path))
def reset_tf_configure_bazelrc():
"""Reset file that contains customized config settings."""
open(_TF_BAZELRC, 'w').close()
def cleanup_makefile():
"""Delete any leftover BUILD files from the Makefile build.
These files could interfere with Bazel parsing.
"""
makefile_download_dir = os.path.join(_TF_WORKSPACE_ROOT, 'tensorflow',
'contrib', 'makefile', 'downloads')
if os.path.isdir(makefile_download_dir):
for root, _, filenames in os.walk(makefile_download_dir):
for f in filenames:
if f.endswith('BUILD'):
os.remove(os.path.join(root, f))
def get_var(environ_cp,
var_name,
query_item,
enabled_by_default,
question=None,
yes_reply=None,
no_reply=None):
"""Get boolean input from user.
If var_name is not set in env, ask user to enable query_item or not. If the
response is empty, use the default.
Args:
environ_cp: copy of the os.environ.
var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".
query_item: string for feature related to the variable, e.g. "CUDA for
Nvidia GPUs".
enabled_by_default: boolean for default behavior.
question: optional string for how to ask for user input.
yes_reply: optional string for reply when feature is enabled.
no_reply: optional string for reply when feature is disabled.
Returns:
boolean value of the variable.
Raises:
UserInputError: if an environment variable is set, but it cannot be
interpreted as a boolean indicator, assume that the user has made a
scripting error, and will continue to provide invalid input.
Raise the error to avoid infinitely looping.
"""
if not question:
question = 'Do you wish to build TensorFlow with {} support?'.format(
query_item)
if not yes_reply:
yes_reply = '{} support will be enabled for TensorFlow.'.format(query_item)
if not no_reply:
no_reply = 'No {}'.format(yes_reply)
yes_reply += '\n'
no_reply += '\n'
if enabled_by_default:
question += ' [Y/n]: '
else:
question += ' [y/N]: '
var = environ_cp.get(var_name)
if var is not None:
var_content = var.strip().lower()
true_strings = ('1', 't', 'true', 'y', 'yes')
false_strings = ('0', 'f', 'false', 'n', 'no')
if var_content in true_strings:
var = True
elif var_content in false_strings:
var = False
else:
raise UserInputError(
'Environment variable %s must be set as a boolean indicator.\n'
'The following are accepted as TRUE : %s.\n'
'The following are accepted as FALSE: %s.\n'
'Current value is %s.' %
(var_name, ', '.join(true_strings), ', '.join(false_strings), var))
while var is None:
user_input_origin = get_input(question)
user_input = user_input_origin.strip().lower()
if user_input == 'y':
print(yes_reply)
var = True
elif user_input == 'n':
print(no_reply)
var = False
elif not user_input:
if enabled_by_default:
print(yes_reply)
var = True
else:
print(no_reply)
var = False
else:
print('Invalid selection: {}'.format(user_input_origin))
return var
def set_action_env_var(environ_cp,
var_name,
query_item,
enabled_by_default,
question=None,
yes_reply=None,
no_reply=None,
bazel_config_name=None):
"""Set boolean action_env variable.
Ask user if query_item will be enabled. Default is used if no input is given.
Set environment variable and write to .bazelrc.
Args:
environ_cp: copy of the os.environ.
var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".
query_item: string for feature related to the variable, e.g. "CUDA for
Nvidia GPUs".
enabled_by_default: boolean for default behavior.
question: optional string for how to ask for user input.
yes_reply: optional string for reply when feature is enabled.
no_reply: optional string for reply when feature is disabled.
bazel_config_name: if set, write 'build --config=<name>' to .bazelrc instead of an action_env entry.
"""
var = int(
get_var(environ_cp, var_name, query_item, enabled_by_default, question,
yes_reply, no_reply))
if not bazel_config_name:
write_action_env_to_bazelrc(var_name, var)
elif var:
write_to_bazelrc('build --config=%s' % bazel_config_name)
environ_cp[var_name] = str(var)
def convert_version_to_int(version):
"""Convert a version number to a integer that can be used to compare.
Version strings of the form X.YZ and X.Y.Z-xxxxx are supported. The
'xxxxx' part, for instance 'homebrew' on OS/X, is ignored.
Args:
version: a version to be converted
Returns:
An integer if converted successfully, otherwise return None.
"""
version = version.split('-')[0]
version_segments = version.split('.')
# Treat "0.24" as "0.24.0"
if len(version_segments) == 2:
version_segments.append('0')
for seg in version_segments:
if not seg.isdigit():
return None
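  # Pack each segment into three digits: '2.35.0' -> '002035000' -> 2035000.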
version_str = ''.join(['%03d' % int(seg) for seg in version_segments])
return int(version_str)
def retrieve_bazel_version():
"""Retrieve installed bazel version (or bazelisk).
Returns:
The bazel version detected.
"""
bazel_executable = shutil.which('bazel')
if bazel_executable is None:
bazel_executable = shutil.which('bazelisk')
if bazel_executable is None:
print('Cannot find bazel. Please install bazel/bazelisk.')
sys.exit(1)
stderr = open(os.devnull, 'wb')
curr_version = run_shell([bazel_executable, '--version'],
allow_non_zero=True,
stderr=stderr)
if curr_version.startswith('bazel '):
curr_version = curr_version.split('bazel ')[1]
curr_version_int = convert_version_to_int(curr_version)
# Check if current bazel version can be detected properly.
if not curr_version_int:
print('WARNING: current bazel installation is not a release version.')
return curr_version
print('You have bazel %s installed.' % curr_version)
return curr_version
def set_cc_opt_flags(environ_cp):
"""Set up architecture-dependent optimization flags.
Also appends CC optimization flags to .bazelrc.
Args:
environ_cp: copy of the os.environ.
"""
if is_ppc64le():
# gcc on ppc64le does not support -march, use mcpu instead
default_cc_opt_flags = '-mcpu=native'
elif is_windows():
default_cc_opt_flags = '/arch:AVX'
else:
# On all other platforms, no longer use `-march=native` as this can result
# in instructions that are too modern being generated. Users that want
# maximum performance should compile TF in their environment and can pass
# `-march=native` there.
# See https://github.com/tensorflow/tensorflow/issues/45744 and duplicates
default_cc_opt_flags = '-Wno-sign-compare'
question = ('Please specify optimization flags to use during compilation when'
' bazel option "--config=opt" is specified [Default is %s]: '
) % default_cc_opt_flags
cc_opt_flags = get_from_env_or_user_or_default(environ_cp, 'CC_OPT_FLAGS',
question, default_cc_opt_flags)
for opt in cc_opt_flags.split():
write_to_bazelrc('build:opt --copt=%s' % opt)
write_to_bazelrc('build:opt --host_copt=%s' % opt)
def set_tf_cuda_clang(environ_cp):
"""set TF_CUDA_CLANG action_env.
Args:
environ_cp: copy of the os.environ.
"""
question = 'Do you want to use clang as CUDA compiler?'
yes_reply = 'Clang will be used as CUDA compiler.'
no_reply = 'nvcc will be used as CUDA compiler.'
set_action_env_var(
environ_cp,
'TF_CUDA_CLANG',
None,
True,
question=question,
yes_reply=yes_reply,
no_reply=no_reply,
bazel_config_name='cuda_clang',
)
def set_tf_download_clang(environ_cp):
"""Set TF_DOWNLOAD_CLANG action_env."""
question = 'Do you wish to download a fresh release of clang? (Experimental)'
yes_reply = 'Clang will be downloaded and used to compile tensorflow.'
no_reply = 'Clang will not be downloaded.'
set_action_env_var(
environ_cp,
'TF_DOWNLOAD_CLANG',
None,
False,
question=question,
yes_reply=yes_reply,
no_reply=no_reply,
bazel_config_name='download_clang')
def get_from_env_or_user_or_default(environ_cp, var_name, ask_for_var,
var_default):
"""Get var_name either from env, or user or default.
If var_name has been set as environment variable, use the preset value, else
ask for user input. If no input is provided, the default is used.
Args:
environ_cp: copy of the os.environ.
var_name: string for name of environment variable, e.g. "TF_NEED_CUDA".
ask_for_var: string for how to ask for user input.
var_default: default value string.
Returns:
string value for var_name
"""
var = environ_cp.get(var_name)
# an intentionally empty value in the
# environment is not the same as no value
if var is None:
var = get_input(ask_for_var)
print('\n')
if not var:
var = var_default
return var
def prompt_loop_or_load_from_env(environ_cp,
var_name,
var_default,
ask_for_var,
check_success,
error_msg,
suppress_default_error=False,
resolve_symlinks=False,
n_ask_attempts=_DEFAULT_PROMPT_ASK_ATTEMPTS):
"""Loop over user prompts for an ENV param until receiving a valid response.
For the env param var_name, read from the environment or verify user input
until receiving valid input. When done, set var_name in the environ_cp to its
new value.
Args:
environ_cp: (Dict) copy of the os.environ.
var_name: (String) string for name of environment variable, e.g. "TF_MYVAR".
var_default: (String) default value string.
ask_for_var: (String) string for how to ask for user input.
check_success: (Function) function that takes one argument and returns a
boolean. Should return True if the value provided is considered valid. May
contain a complex error message if error_msg does not provide enough
information. In that case, set suppress_default_error to True.
error_msg: (String) String with one and only one '%s'. Formatted with each
invalid response upon check_success(input) failure.
suppress_default_error: (Bool) Suppress the above error message in favor of
one from the check_success function.
resolve_symlinks: (Bool) Translate symbolic links into the real filepath.
n_ask_attempts: (Integer) Number of times to query for valid input before
raising an error and quitting.
Returns:
[String] The value of var_name after querying for input.
Raises:
UserInputError: if a query has been attempted n_ask_attempts times without
success, assume that the user has made a scripting error, and will
continue to provide invalid input. Raise the error to avoid infinitely
looping.
"""
default = environ_cp.get(var_name) or var_default
full_query = '%s [Default is %s]: ' % (
ask_for_var,
default,
)
for _ in range(n_ask_attempts):
val = get_from_env_or_user_or_default(environ_cp, var_name, full_query,
default)
if check_success(val):
break
if not suppress_default_error:
print(error_msg % val)
environ_cp[var_name] = ''
else:
raise UserInputError('Invalid %s setting was provided %d times in a row. '
'Assuming to be a scripting mistake.' %
(var_name, n_ask_attempts))
if resolve_symlinks:
val = os.path.realpath(val)
environ_cp[var_name] = val
return val
def set_clang_cuda_compiler_path(environ_cp):
"""Set CLANG_CUDA_COMPILER_PATH."""
# Upon clang 19 drop the check for 16
default_clang_path = '/usr/lib/llvm-18/bin/clang'
if not os.path.exists(default_clang_path):
default_clang_path = '/usr/lib/llvm-17/bin/clang'
if not os.path.exists(default_clang_path):
default_clang_path = '/usr/lib/llvm-16/bin/clang'
if not os.path.exists(default_clang_path):
default_clang_path = shutil.which('clang') or ''
clang_cuda_compiler_path = prompt_loop_or_load_from_env(
environ_cp,
var_name='CLANG_CUDA_COMPILER_PATH',
var_default=default_clang_path,
ask_for_var='Please specify the clang path to be used as the host compiler.',
check_success=os.path.exists,
resolve_symlinks=True,
error_msg='Invalid clang path. %s cannot be found.',
)
# Set CLANG_CUDA_COMPILER_PATH
environ_cp['CLANG_CUDA_COMPILER_PATH'] = clang_cuda_compiler_path
write_action_env_to_bazelrc('CLANG_CUDA_COMPILER_PATH',
clang_cuda_compiler_path)
return clang_cuda_compiler_path
def create_android_ndk_rule(environ_cp):
"""Set ANDROID_NDK_HOME and write Android NDK WORKSPACE rule."""
if is_windows() or is_cygwin():
default_ndk_path = cygpath('%s/Android/Sdk/ndk-bundle' %
environ_cp['APPDATA'])
elif is_macos():
default_ndk_path = '%s/library/Android/Sdk/ndk-bundle' % environ_cp['HOME']
else:
default_ndk_path = '%s/Android/Sdk/ndk-bundle' % environ_cp['HOME']
def valid_ndk_path(path):
return (os.path.exists(path) and
os.path.exists(os.path.join(path, 'source.properties')))
android_ndk_home_path = prompt_loop_or_load_from_env(
environ_cp,
var_name='ANDROID_NDK_HOME',
var_default=default_ndk_path,
ask_for_var='Please specify the home path of the Android NDK to use.',
check_success=valid_ndk_path,
error_msg=('The path %s or its child file "source.properties" '
'does not exist.'))
write_action_env_to_bazelrc('ANDROID_NDK_HOME', android_ndk_home_path)
write_action_env_to_bazelrc(
'ANDROID_NDK_API_LEVEL',
get_ndk_api_level(environ_cp, android_ndk_home_path))
def create_android_sdk_rule(environ_cp):
"""Set Android variables and write Android SDK WORKSPACE rule."""
if is_windows() or is_cygwin():
default_sdk_path = cygpath('%s/Android/Sdk' % environ_cp['APPDATA'])
elif is_macos():
default_sdk_path = '%s/library/Android/Sdk' % environ_cp['HOME']
else:
default_sdk_path = '%s/Android/Sdk' % environ_cp['HOME']
def valid_sdk_path(path):
return (os.path.exists(path) and
os.path.exists(os.path.join(path, 'platforms')) and
os.path.exists(os.path.join(path, 'build-tools')))
android_sdk_home_path = prompt_loop_or_load_from_env(
environ_cp,
var_name='ANDROID_SDK_HOME',
var_default=default_sdk_path,
ask_for_var='Please specify the home path of the Android SDK to use.',
check_success=valid_sdk_path,
error_msg=('Either %s does not exist, or it does not contain the '
'subdirectories "platforms" and "build-tools".'))
platforms = os.path.join(android_sdk_home_path, 'platforms')
api_levels = sorted(os.listdir(platforms))
api_levels = [x.replace('android-', '') for x in api_levels]
def valid_api_level(api_level):
return os.path.exists(
os.path.join(android_sdk_home_path, 'platforms',
'android-' + api_level))
android_api_level = prompt_loop_or_load_from_env(
environ_cp,
var_name='ANDROID_API_LEVEL',
var_default=api_levels[-1],
ask_for_var=('Please specify the Android SDK API level to use. '
'[Available levels: %s]') % api_levels,
check_success=valid_api_level,
error_msg='Android-%s is not present in the SDK path.')
build_tools = os.path.join(android_sdk_home_path, 'build-tools')
versions = sorted(os.listdir(build_tools))
def valid_build_tools(version):
return os.path.exists(
os.path.join(android_sdk_home_path, 'build-tools', version))
android_build_tools_version = prompt_loop_or_load_from_env(
environ_cp,
var_name='ANDROID_BUILD_TOOLS_VERSION',
var_default=versions[-1],
ask_for_var=('Please specify an Android build tools version to use. '
'[Available versions: %s]') % versions,
check_success=valid_build_tools,
error_msg=('The selected SDK does not have build-tools version %s '
'available.'))
write_action_env_to_bazelrc('ANDROID_BUILD_TOOLS_VERSION',
android_build_tools_version)
write_action_env_to_bazelrc('ANDROID_SDK_API_LEVEL', android_api_level)
write_action_env_to_bazelrc('ANDROID_SDK_HOME', android_sdk_home_path)
def get_ndk_api_level(environ_cp, android_ndk_home_path):
"""Gets the appropriate NDK API level to use for the provided Android NDK path.
"""
# First check to see if we're using a blessed version of the NDK.
properties_path = '%s/source.properties' % android_ndk_home_path
if is_windows() or is_cygwin():
properties_path = cygpath(properties_path)
with open(properties_path, 'r') as f:
filedata = f.read()
revision = re.search(r'Pkg.Revision = (\d+)', filedata)
if revision:
ndk_version = revision.group(1)
else:
raise Exception('Unable to parse NDK revision.')
if int(ndk_version) not in _SUPPORTED_ANDROID_NDK_VERSIONS:
print('WARNING: The NDK version in %s is %s, which is not '
'supported by Bazel (officially supported versions: %s). Please use '
'another version. Compiling Android targets may result in confusing '
'errors.\n' %
(android_ndk_home_path, ndk_version, _SUPPORTED_ANDROID_NDK_VERSIONS))
write_action_env_to_bazelrc('ANDROID_NDK_VERSION', ndk_version)
# Now grab the NDK API level to use. Note that this is different from the
# SDK API level, as the NDK API level is effectively the *min* target SDK
# version.
meta = open(os.path.join(android_ndk_home_path, 'meta/platforms.json'))
platforms = json.load(meta)
meta.close()
aliases = platforms['aliases']
api_levels = sorted(list(set([aliases[i] for i in aliases])))
android_ndk_api_level = prompt_loop_or_load_from_env(
environ_cp,
var_name='ANDROID_NDK_API_LEVEL',
var_default='21', # 21 is required for ARM64 support.
ask_for_var=(
'Please specify the (min) Android NDK API level to use. '
'[Available levels: %s]'
)
% api_levels,
check_success=(lambda *_: True),
error_msg='Android-%s is not present in the NDK path.',
)
return android_ndk_api_level
def set_gcc_host_compiler_path(environ_cp):
"""Set GCC_HOST_COMPILER_PATH."""
default_gcc_host_compiler_path = shutil.which('gcc') or ''
gcc_host_compiler_path = prompt_loop_or_load_from_env(
environ_cp,
var_name='GCC_HOST_COMPILER_PATH',
var_default=default_gcc_host_compiler_path,
ask_for_var='Please specify which gcc should be used by nvcc as the host '
'compiler.',
check_success=os.path.exists,
resolve_symlinks=True,
error_msg='Invalid gcc path. %s cannot be found.',
)
write_action_env_to_bazelrc('GCC_HOST_COMPILER_PATH', gcc_host_compiler_path)
def choose_compiler(environ_cp):
question = 'Do you want to use Clang to build TensorFlow?'
yes_reply = 'Clang will be used to compile TensorFlow.'
no_reply = 'GCC will be used to compile TensorFlow.'
var = int(
get_var(
environ_cp, 'TF_NEED_CLANG', None, True, question, yes_reply, no_reply
)
)
return var
def choose_compiler_Win(environ_cp):
question = 'Do you want to use Clang to build TensorFlow?'
yes_reply = 'Add "--config=win_clang" to compile TensorFlow with CLANG.'
no_reply = 'MSVC will be used to compile TensorFlow.'
var = int(
get_var(
environ_cp, 'TF_NEED_CLANG', None, True, question, yes_reply, no_reply
)
)
return var
def set_clang_compiler_path(environ_cp):
"""Set CLANG_COMPILER_PATH and environment variables.
Loop over user prompts for clang path until receiving a valid response.
Default is used if no input is given. Set CLANG_COMPILER_PATH and write
environment variables CC and BAZEL_COMPILER to .bazelrc.
Args:
environ_cp: (Dict) copy of the os.environ.
Returns:
string value for clang_compiler_path.
"""
# Default path if clang-18 is installed by using apt-get install
# remove 16 logic upon release of 19
default_clang_path = '/usr/lib/llvm-18/bin/clang'
if not os.path.exists(default_clang_path):
default_clang_path = '/usr/lib/llvm-17/bin/clang'
if not os.path.exists(default_clang_path):
default_clang_path = '/usr/lib/llvm-16/bin/clang'
if not os.path.exists(default_clang_path):
default_clang_path = shutil.which('clang') or ''
clang_compiler_path = prompt_loop_or_load_from_env(
environ_cp,
var_name='CLANG_COMPILER_PATH',
var_default=default_clang_path,
ask_for_var='Please specify the path to clang executable.',
check_success=os.path.exists,
resolve_symlinks=True,
error_msg=(
'Invalid clang path. %s cannot be found. Note that TensorFlow now'
' requires clang to compile. You may override this behavior by'
' setting TF_NEED_CLANG=0'
),
)
write_action_env_to_bazelrc('CLANG_COMPILER_PATH', clang_compiler_path)
write_to_bazelrc('build --repo_env=CC=%s' % clang_compiler_path)
write_to_bazelrc('build --repo_env=BAZEL_COMPILER=%s' % clang_compiler_path)
return clang_compiler_path
def set_clang_compiler_path_win(environ_cp):
"""Set CLANG_COMPILER_PATH and environment variables.
Loop over user prompts for clang path until receiving a valid response.
Default is used if no input is given. Set CLANG_COMPILER_PATH and write
environment variables CC and BAZEL_COMPILER to .bazelrc.
Args:
environ_cp: (Dict) copy of the os.environ.
Returns:
string value for clang_compiler_path.
"""
# Default path if clang-16 is installed by using apt-get install
default_clang_path = 'C:/Program Files/LLVM/bin/clang.exe'
if not os.path.exists(default_clang_path):
default_clang_path = shutil.which('clang') or ''
clang_compiler_path = prompt_loop_or_load_from_env(
environ_cp,
var_name='CLANG_COMPILER_PATH',
var_default=default_clang_path,
ask_for_var='Please specify the path to clang executable.',
check_success=os.path.exists,
resolve_symlinks=True,
error_msg=(
'Invalid clang path. %s cannot be found. Note that Clang is now'
' the preferred compiler. You may use MSVC by removing --config=win_clang'
),
)
write_action_env_to_bazelrc('CLANG_COMPILER_PATH', clang_compiler_path)
write_to_bazelrc(f'build --repo_env=CC="{clang_compiler_path}"')
write_to_bazelrc(f'build --repo_env=BAZEL_COMPILER="{clang_compiler_path}"')
return clang_compiler_path
def retrieve_clang_version(clang_executable):
"""Retrieve installed clang version.
Args:
clang_executable: (String) path to clang executable
Returns:
The clang version detected.
"""
stderr = open(os.devnull, 'wb')
curr_version = run_shell([clang_executable, '--version'],
allow_non_zero=True,
stderr=stderr)
curr_version_split = curr_version.lower().split('clang version ')
if len(curr_version_split) > 1:
curr_version = curr_version_split[1].split()[0].split('git')
if len(curr_version) > 1:
print('WARNING: current clang installation is not a release version.\n')
curr_version = curr_version[0]
curr_version_int = convert_version_to_int(curr_version)
# Check if current clang version can be detected properly.
if not curr_version_int:
print('WARNING: current clang installation version unknown.\n')
return None
print('You have Clang %s installed.\n' % curr_version)
return curr_version
# Disable clang extension that rejects type definitions within offsetof.
# This was added in clang-16 by https://reviews.llvm.org/D133574.
# Still required for clang-17.
# Can be removed once upb is updated, since a type definition is used within
# offsetof in the current version of upb. See
# https://github.com/protocolbuffers/upb/blob/9effcbcb27f0a665f9f345030188c0b291e32482/upb/upb.c#L183.
def disable_clang_offsetof_extension(clang_version):
if int(clang_version.split('.')[0]) in (16, 17):
write_to_bazelrc('build --copt=-Wno-gnu-offsetof-extensions')
def set_hermetic_cuda_version(environ_cp):
"""Set HERMETIC_CUDA_VERSION."""
ask_cuda_version = (
'Please specify the hermetic CUDA version you want to use '
'or leave empty to use the default version. '
)
hermetic_cuda_version = get_from_env_or_user_or_default(
environ_cp, 'HERMETIC_CUDA_VERSION', ask_cuda_version, None
)
if hermetic_cuda_version:
environ_cp['HERMETIC_CUDA_VERSION'] = hermetic_cuda_version
write_repo_env_to_bazelrc(
'cuda', 'HERMETIC_CUDA_VERSION', hermetic_cuda_version
)
def set_hermetic_cudnn_version(environ_cp):
"""Set HERMETIC_CUDNN_VERSION."""
ask_cudnn_version = (
'Please specify the hermetic cuDNN version you want to use '
'or leave empty to use the default version. '
)
hermetic_cudnn_version = get_from_env_or_user_or_default(
environ_cp, 'HERMETIC_CUDNN_VERSION', ask_cudnn_version, None
)
if hermetic_cudnn_version:
environ_cp['HERMETIC_CUDNN_VERSION'] = hermetic_cudnn_version
write_repo_env_to_bazelrc(
'cuda', 'HERMETIC_CUDNN_VERSION', hermetic_cudnn_version
)
def set_hermetic_cuda_compute_capabilities(environ_cp):
"""Set HERMETIC_CUDA_COMPUTE_CAPABILITIES."""
while True:
default_cuda_compute_capabilities = _DEFAULT_CUDA_COMPUTE_CAPABILITIES
ask_cuda_compute_capabilities = (
'Please specify a list of comma-separated CUDA compute capabilities '
'you want to build with.\nYou can find the compute capability of your '
'device at: https://developer.nvidia.com/cuda-gpus. Each capability '
'can be specified as "x.y" or "compute_xy" to include both virtual and'
' binary GPU code, or as "sm_xy" to only include the binary '
'code.\nPlease note that each additional compute capability '
'significantly increases your build time and binary size, and that '
'TensorFlow only supports compute capabilities >= 3.5 [Default is: '
'%s]: ' % default_cuda_compute_capabilities)
hermetic_cuda_compute_capabilities = get_from_env_or_user_or_default(
environ_cp,
'HERMETIC_CUDA_COMPUTE_CAPABILITIES',
ask_cuda_compute_capabilities,
default_cuda_compute_capabilities,
)
# Check whether all capabilities from the input are valid
all_valid = True
# Remove any whitespace characters the user may have inserted by accident,
# since stray spaces would make otherwise valid entries fail the checks below.
hermetic_cuda_compute_capabilities = ''.join(
hermetic_cuda_compute_capabilities.split()
)
for compute_capability in hermetic_cuda_compute_capabilities.split(','):
m = re.match(r'[0-9]+\.[0-9]+', compute_capability)
if not m:
# We now support sm_35,sm_50,sm_60,compute_70.
sm_compute_match = re.match('(sm|compute)_?([0-9]+[0-9]+)',
compute_capability)
if not sm_compute_match:
print('Invalid compute capability: %s' % compute_capability)
all_valid = False
else:
ver = int(sm_compute_match.group(2))
if ver < 30:
print(
'ERROR: TensorFlow only supports CUDA compute'
' capabilities of sm_30 and higher. Please re-specify the list'
' of compute capabilities excluding version %s.' % ver)
all_valid = False
if ver < 35:
print('WARNING: XLA does not support CUDA compute capabilities '
'lower than sm_35. Disable XLA when running on older GPUs.')
else:
ver = float(m.group(0))
if ver < 3.0:
print('ERROR: TensorFlow only supports CUDA compute capabilities 3.0 '
'and higher. Please re-specify the list of compute '
'capabilities excluding version %s.' % ver)
all_valid = False
if ver < 3.5:
print('WARNING: XLA does not support CUDA compute capabilities '
'lower than 3.5. Disable XLA when running on older GPUs.')
if all_valid:
break
# Reset and Retry
environ_cp['HERMETIC_CUDA_COMPUTE_CAPABILITIES'] = ''
# Set HERMETIC_CUDA_COMPUTE_CAPABILITIES
environ_cp['HERMETIC_CUDA_COMPUTE_CAPABILITIES'] = (
hermetic_cuda_compute_capabilities
)
write_repo_env_to_bazelrc(
'cuda',
'HERMETIC_CUDA_COMPUTE_CAPABILITIES',
hermetic_cuda_compute_capabilities,
)
def set_cuda_local_path(environ_cp, dist_name, env_var):
ask_path = (
'Please specify the local {} path you want to use '
'or leave empty to use the default version. '
).format(dist_name)
local_path = get_from_env_or_user_or_default(
environ_cp, env_var, ask_path, None
)
if local_path:
environ_cp[env_var] = local_path
write_repo_env_to_bazelrc('cuda', env_var, local_path)
def set_other_cuda_vars(environ_cp):
"""Set other CUDA related variables."""
# If CUDA is enabled, always use GPU during build and test.
if environ_cp.get('TF_CUDA_CLANG') == '1':
write_to_bazelrc('build --config=cuda_clang')
else:
write_to_bazelrc('build --config=cuda')
def system_specific_test_config(environ_cp):
"""Add default build and test flags required for TF tests to bazelrc."""
write_to_bazelrc('test --test_size_filters=small,medium')
# Each instance of --test_tag_filters or --build_tag_filters overrides all
# previous instances, so we need to build up a complete list and write a
# single list of filters for the .bazelrc file.
# Filters to use with both --test_tag_filters and --build_tag_filters
test_and_build_filters = ['-benchmark-test', '-no_oss', '-oss_excluded']
# Additional filters for --test_tag_filters beyond those in
# test_and_build_filters
test_only_filters = ['-oss_serial']
if is_windows():
test_and_build_filters += ['-no_windows', '-windows_excluded']
if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or
(environ_cp.get('TF_NEED_ROCM', None) == '1')):
test_and_build_filters += ['-no_windows_gpu', '-no_gpu']
else:
test_and_build_filters.append('-gpu')
elif is_macos():
test_and_build_filters += ['-gpu', '-nomac', '-no_mac', '-mac_excluded']
elif is_linux():
if ((environ_cp.get('TF_NEED_CUDA', None) == '1') or
(environ_cp.get('TF_NEED_ROCM', None) == '1')):
test_and_build_filters.append('-no_gpu')
write_to_bazelrc('test --test_env=LD_LIBRARY_PATH')
else:
test_and_build_filters.append('-gpu')
# Disable tests with "v1only" tag in "v2" Bazel config, but not in "v1" config
write_to_bazelrc('test:v1 --test_tag_filters=%s' %
','.join(test_and_build_filters + test_only_filters))
write_to_bazelrc('test:v1 --build_tag_filters=%s' %
','.join(test_and_build_filters))
write_to_bazelrc(
'test:v2 --test_tag_filters=%s' %
','.join(test_and_build_filters + test_only_filters + ['-v1only']))
write_to_bazelrc('test:v2 --build_tag_filters=%s' %
','.join(test_and_build_filters + ['-v1only']))
def set_system_libs_flag(environ_cp):
"""Set system libs flags."""
syslibs = environ_cp.get('TF_SYSTEM_LIBS', '')
if is_s390x() and 'boringssl' not in syslibs:
syslibs = 'boringssl' + (', ' + syslibs if syslibs else '')
if syslibs:
if ',' in syslibs:
syslibs = ','.join(sorted(syslibs.split(',')))
else:
syslibs = ','.join(sorted(syslibs.split()))
write_action_env_to_bazelrc('TF_SYSTEM_LIBS', syslibs)
for varname in ('PREFIX', 'PROTOBUF_INCLUDE_PATH'):
if varname in environ_cp:
write_to_bazelrc('build --define=%s=%s' % (varname, environ_cp[varname]))
def set_windows_build_flags(environ_cp):
"""Set Windows specific build options."""
# First available in VS 16.4. Speeds up Windows compile times by a lot. See
# https://groups.google.com/a/tensorflow.org/d/topic/build/SsW98Eo7l3o/discussion
# pylint: disable=line-too-long
write_to_bazelrc(
'build --copt=/d2ReducedOptimizeHugeFunctions --host_copt=/d2ReducedOptimizeHugeFunctions'
)
if get_var(
environ_cp, 'TF_OVERRIDE_EIGEN_STRONG_INLINE', 'Eigen strong inline',
True, ('Would you like to override eigen strong inline for some C++ '
'compilation to reduce the compilation time?'),
'Eigen strong inline overridden.', 'Not overriding eigen strong inline, '
'some compilations could take more than 20 mins.'):
# Due to a known MSVC compiler issue
# https://github.com/tensorflow/tensorflow/issues/10521
# Overriding eigen strong inline speeds up the compiling of
# conv_grad_ops_3d.cc and conv_ops_3d.cc by 20 minutes,
# but this also hurts the performance. Let users decide what they want.
write_to_bazelrc('build --define=override_eigen_strong_inline=true')
def config_info_line(name, help_text):
"""Helper function to print formatted help text for Bazel config options."""
print('\t--config=%-12s\t# %s' % (name, help_text))
def configure_ios(environ_cp):
"""Configures TensorFlow for iOS builds."""
if not is_macos():
return
if not get_var(environ_cp, 'TF_CONFIGURE_IOS', 'iOS', False):
return
for filepath in APPLE_BAZEL_FILES:
existing_filepath = os.path.join(_TF_WORKSPACE_ROOT, filepath + '.apple')
renamed_filepath = os.path.join(_TF_WORKSPACE_ROOT, filepath)
symlink_force(existing_filepath, renamed_filepath)
for filepath in IOS_FILES:
filename = os.path.basename(filepath)
new_filepath = os.path.join(_TF_WORKSPACE_ROOT, filename)
symlink_force(filepath, new_filepath)
def get_gcc_compiler(environ_cp):
gcc_env = environ_cp.get('CXX') or environ_cp.get('CC') or shutil.which('gcc')
if gcc_env is not None:
gcc_version = run_shell([gcc_env, '--version']).split()
if gcc_version[0] in ('gcc', 'g++'):
return gcc_env
return None
def main():
global _TF_WORKSPACE_ROOT
global _TF_BAZELRC
global _TF_CURRENT_BAZEL_VERSION
parser = argparse.ArgumentParser()
parser.add_argument(
'--workspace',
type=str,
default=os.path.abspath(os.path.dirname(__file__)),
help='The absolute path to your active Bazel workspace.')
args = parser.parse_args()
_TF_WORKSPACE_ROOT = args.workspace
_TF_BAZELRC = os.path.join(_TF_WORKSPACE_ROOT, _TF_BAZELRC_FILENAME)
# Make a copy of os.environ so that functions that get and set environment
# variables operate on the copy rather than on os.environ itself.
environ_cp = dict(os.environ)
try:
current_bazel_version = retrieve_bazel_version()
except subprocess.CalledProcessError as e:
print('Error retrieving bazel version: ', e.output.decode('UTF-8').strip())
raise e
_TF_CURRENT_BAZEL_VERSION = convert_version_to_int(current_bazel_version)
reset_tf_configure_bazelrc()
cleanup_makefile()
setup_python(environ_cp)
if is_windows():
environ_cp['TF_NEED_OPENCL'] = '0'
environ_cp['TF_CUDA_CLANG'] = '0'
# TODO(ibiryukov): Investigate using clang as a cpu or cuda compiler on
# Windows.
environ_cp['TF_DOWNLOAD_CLANG'] = '0'
environ_cp['TF_NEED_MPI'] = '0'
if is_ppc64le():
# Enable MMA Dynamic Dispatch support if 'gcc' and if linker >= 2.35
gcc_env = get_gcc_compiler(environ_cp)
if gcc_env is not None:
# Use gold linker if 'gcc' and if 'ppc64le'
write_to_bazelrc('build --linkopt="-fuse-ld=gold"')
# Get the linker version
ld_version = run_shell([gcc_env, '-Wl,-version']).split()
ld_version_int = 0
for i in range(len(ld_version)):
ld_version_int = convert_version_to_int(ld_version[i])
if ld_version_int is not None:
break
if ld_version_int is None:
ld_version_int = 0
# Enable if 'ld' version >= 2.35
if ld_version_int >= 2035000:
write_to_bazelrc(
'build --copt="-DEIGEN_ALTIVEC_ENABLE_MMA_DYNAMIC_DISPATCH=1"')
with_xla_support = environ_cp.get('TF_ENABLE_XLA', None)
if with_xla_support is not None:
write_to_bazelrc('build --define=with_xla_support=%s' %
('true' if int(with_xla_support) else 'false'))
set_action_env_var(
environ_cp, 'TF_NEED_ROCM', 'ROCm', False, bazel_config_name='rocm')
if (environ_cp.get('TF_NEED_ROCM') == '1' and
'LD_LIBRARY_PATH' in environ_cp and
environ_cp.get('LD_LIBRARY_PATH') != '1'):
write_action_env_to_bazelrc('LD_LIBRARY_PATH',
environ_cp.get('LD_LIBRARY_PATH'))
if (environ_cp.get('TF_NEED_ROCM') == '1' and environ_cp.get('ROCM_PATH')):
write_action_env_to_bazelrc('ROCM_PATH', environ_cp.get('ROCM_PATH'))
if (environ_cp.get('TF_NEED_ROCM') == '1' and environ_cp.get('HIP_PLATFORM')):
write_action_env_to_bazelrc('HIP_PLATFORM', environ_cp.get('HIP_PLATFORM'))
if is_windows():
print('\nWARNING: Cannot build with CUDA support on Windows.\n'
'Starting in TF 2.11, CUDA build is not supported for Windows. '
'To use TensorFlow with GPU on Windows, you will need to build/install '
'TensorFlow in WSL2.\n')
environ_cp['TF_NEED_CUDA'] = '0'
else:
environ_cp['TF_NEED_CUDA'] = str(
int(get_var(environ_cp, 'TF_NEED_CUDA', 'CUDA', False)))
if environ_cp.get('TF_NEED_CUDA') == '1':
set_hermetic_cuda_version(environ_cp)
set_hermetic_cudnn_version(environ_cp)
set_hermetic_cuda_compute_capabilities(environ_cp)
set_cuda_local_path(environ_cp, 'CUDA', 'LOCAL_CUDA_PATH')
set_cuda_local_path(environ_cp, 'CUDNN', 'LOCAL_CUDNN_PATH')
set_cuda_local_path(environ_cp, 'NCCL', 'LOCAL_NCCL_PATH')
if 'LD_LIBRARY_PATH' in environ_cp and environ_cp.get(
'LD_LIBRARY_PATH') != '1':
write_action_env_to_bazelrc('LD_LIBRARY_PATH',
environ_cp.get('LD_LIBRARY_PATH'))
set_tf_cuda_clang(environ_cp)
if environ_cp.get('TF_CUDA_CLANG') == '1':
# Set up which clang we should use as the cuda / host compiler.
clang_cuda_compiler_path = set_clang_cuda_compiler_path(environ_cp)
clang_version = retrieve_clang_version(clang_cuda_compiler_path)
disable_clang_offsetof_extension(clang_version)
else:
# Set up which gcc nvcc should use as the host compiler
# No need to set this on Windows
if not is_windows():
set_gcc_host_compiler_path(environ_cp)
set_other_cuda_vars(environ_cp)
else:
# CUDA not required. Ask whether we should use clang for the CPU build.
if is_linux():
environ_cp['TF_NEED_CLANG'] = str(choose_compiler(environ_cp))
if environ_cp.get('TF_NEED_CLANG') == '1':
clang_compiler_path = set_clang_compiler_path(environ_cp)
clang_version = retrieve_clang_version(clang_compiler_path)
disable_clang_offsetof_extension(clang_version)
if is_windows():
environ_cp['TF_NEED_CLANG'] = str(choose_compiler_Win(environ_cp))
if environ_cp.get('TF_NEED_CLANG') == '1':
clang_compiler_path = set_clang_compiler_path_win(environ_cp)
clang_version = retrieve_clang_version(clang_compiler_path)
disable_clang_offsetof_extension(clang_version)
# ROCm / CUDA are mutually exclusive.
# At most 1 GPU platform can be configured.
gpu_platform_count = 0
if environ_cp.get('TF_NEED_ROCM') == '1':
gpu_platform_count += 1
if environ_cp.get('TF_NEED_CUDA') == '1':
gpu_platform_count += 1
if gpu_platform_count >= 2:
    raise UserInputError('CUDA / ROCm are mutually exclusive. '
                         'At most 1 GPU platform can be configured.')
set_cc_opt_flags(environ_cp)
set_system_libs_flag(environ_cp)
if is_windows():
set_windows_build_flags(environ_cp)
if get_var(environ_cp, 'TF_SET_ANDROID_WORKSPACE', 'android workspace', False,
('Would you like to interactively configure ./WORKSPACE for '
'Android builds?'), 'Searching for NDK and SDK installations.',
'Not configuring the WORKSPACE for Android builds.'):
create_android_ndk_rule(environ_cp)
create_android_sdk_rule(environ_cp)
system_specific_test_config(environ_cp)
configure_ios(environ_cp)
print('Preconfigured Bazel build configs. You can use any of the below by '
'adding "--config=<>" to your build command. See .bazelrc for more '
'details.')
config_info_line('mkl', 'Build with MKL support.')
config_info_line(
'mkl_aarch64',
'Build with oneDNN and Compute Library for the Arm Architecture (ACL).')
config_info_line('monolithic', 'Config for mostly static monolithic build.')
config_info_line('numa', 'Build with NUMA support.')
config_info_line(
'dynamic_kernels',
'(Experimental) Build kernels into separate shared objects.')
config_info_line('v1', 'Build with TensorFlow 1 API instead of TF 2 API.')
print('Preconfigured Bazel build configs to DISABLE default on features:')
config_info_line('nogcp', 'Disable GCP support.')
config_info_line('nonccl', 'Disable NVIDIA NCCL support.')
if __name__ == '__main__':
main()
|
https://github.com/pytorch/pytorch
|
README.md
|

--------------------------------------------------------------------------------
PyTorch is a Python package that provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system
You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
Our trunk health (Continuous Integration signals) can be found at [hud.pytorch.org](https://hud.pytorch.org/ci/pytorch/pytorch/main).
<!-- toc -->
- [More About PyTorch](#more-about-pytorch)
- [A GPU-Ready Tensor Library](#a-gpu-ready-tensor-library)
- [Dynamic Neural Networks: Tape-Based Autograd](#dynamic-neural-networks-tape-based-autograd)
- [Python First](#python-first)
- [Imperative Experiences](#imperative-experiences)
- [Fast and Lean](#fast-and-lean)
- [Extensions Without Pain](#extensions-without-pain)
- [Installation](#installation)
- [Binaries](#binaries)
- [NVIDIA Jetson Platforms](#nvidia-jetson-platforms)
- [From Source](#from-source)
- [Prerequisites](#prerequisites)
- [NVIDIA CUDA Support](#nvidia-cuda-support)
- [AMD ROCm Support](#amd-rocm-support)
- [Intel GPU Support](#intel-gpu-support)
- [Get the PyTorch Source](#get-the-pytorch-source)
- [Install Dependencies](#install-dependencies)
- [Install PyTorch](#install-pytorch)
- [Adjust Build Options (Optional)](#adjust-build-options-optional)
- [Docker Image](#docker-image)
- [Using pre-built images](#using-pre-built-images)
- [Building the image yourself](#building-the-image-yourself)
- [Building the Documentation](#building-the-documentation)
- [Building a PDF](#building-a-pdf)
- [Previous Versions](#previous-versions)
- [Getting Started](#getting-started)
- [Resources](#resources)
- [Communication](#communication)
- [Releases and Contributing](#releases-and-contributing)
- [The Team](#the-team)
- [License](#license)
<!-- tocstop -->
## More About PyTorch
[Learn the basics of PyTorch](https://pytorch.org/tutorials/beginner/basics/intro.html)
At a granular level, PyTorch is a library that consists of the following components:
| Component | Description |
| ---- | --- |
| [**torch**](https://pytorch.org/docs/stable/torch.html) | A Tensor library like NumPy, with strong GPU support |
| [**torch.autograd**](https://pytorch.org/docs/stable/autograd.html) | A tape-based automatic differentiation library that supports all differentiable Tensor operations in torch |
| [**torch.jit**](https://pytorch.org/docs/stable/jit.html) | A compilation stack (TorchScript) to create serializable and optimizable models from PyTorch code |
| [**torch.nn**](https://pytorch.org/docs/stable/nn.html) | A neural networks library deeply integrated with autograd designed for maximum flexibility |
| [**torch.multiprocessing**](https://pytorch.org/docs/stable/multiprocessing.html) | Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training |
| [**torch.utils**](https://pytorch.org/docs/stable/data.html) | DataLoader and other utility functions for convenience |
Usually, PyTorch is used either as:
- A replacement for NumPy to use the power of GPUs.
- A deep learning research platform that provides maximum flexibility and speed.
Elaborating Further:
### A GPU-Ready Tensor Library
If you use NumPy, then you have used Tensors (a.k.a. ndarray).

PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the
computation by a huge amount.
We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs
such as slicing, indexing, mathematical operations, linear algebra, and reductions.
And they are fast!
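For illustration, a minimal sketch (assuming a standard `torch` install) of moving a tensor to the GPU, when one is available, and applying a few of these routines:
```python
import torch

# Create a tensor on the CPU, then move it to the GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 4).to(device)

# The usual NumPy-style routines work on either device.
row = x[0]               # indexing
col_sums = x.sum(dim=0)  # a reduction
y = x @ x.T              # linear algebra: matrix multiply
print(row.shape, col_sums.shape, y.shape)
```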
### Dynamic Neural Networks: Tape-Based Autograd
PyTorch has a unique way of building neural networks: using and replaying a tape recorder.
Most frameworks such as TensorFlow, Theano, Caffe, and CNTK have a static view of the world.
One has to build a neural network and reuse the same structure again and again.
Changing the way the network behaves means that one has to start from scratch.
With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to
change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes
from several research papers on this topic, as well as current and past work such as
[torch-autograd](https://github.com/twitter/torch-autograd),
[autograd](https://github.com/HIPS/autograd),
[Chainer](https://chainer.org), etc.
While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
You get the best of speed and flexibility for your crazy research.

### Python First
PyTorch is not a Python binding into a monolithic C++ framework.
It is built to be deeply integrated into Python.
You can use it naturally like you would use [NumPy](https://www.numpy.org/) / [SciPy](https://www.scipy.org/) / [scikit-learn](https://scikit-learn.org) etc.
You can write your new neural network layers in Python itself, using your favorite libraries
and use packages such as [Cython](https://cython.org/) and [Numba](http://numba.pydata.org/).
Our goal is to not reinvent the wheel where appropriate.
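For example, a new layer is just a Python class that autograd differentiates through (the `ScaledResidual` layer below is a made-up illustration, not a library API):
```python
import torch
import torch.nn as nn

class ScaledResidual(nn.Module):
    """A toy layer: x + scale * relu(Wx + b), trainable end to end."""

    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x):
        return x + self.scale * torch.relu(self.linear(x))

layer = ScaledResidual(8)
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```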
### Imperative Experiences
PyTorch is designed to be intuitive, linear in thought, and easy to use.
When you execute a line of code, it gets executed. There isn't an asynchronous view of the world.
When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward.
The stack trace points to exactly where your code was defined.
We hope you never spend hours debugging your code because of bad stack traces or asynchronous and opaque execution engines.
### Fast and Lean
PyTorch has minimal framework overhead. We integrate acceleration libraries
such as [Intel MKL](https://software.intel.com/mkl) and NVIDIA ([cuDNN](https://developer.nvidia.com/cudnn), [NCCL](https://developer.nvidia.com/nccl)) to maximize speed.
At the core, its CPU and GPU Tensor and neural network backends
are mature and have been tested for years.
Hence, PyTorch is quite fast — whether you run small or large neural networks.
The memory usage in PyTorch is extremely efficient compared to Torch or some of the alternatives.
We've written custom memory allocators for the GPU to make sure that
your deep learning models are maximally memory efficient.
This enables you to train bigger deep learning models than before.
### Extensions Without Pain
Writing new neural network modules, or interfacing with PyTorch's Tensor API, was designed to be straightforward,
with minimal abstractions.
You can write new neural network layers in Python using the torch API
[or your favorite NumPy-based libraries such as SciPy](https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html).
If you want to write your layers in C/C++, we provide a convenient extension API that is efficient and with minimal boilerplate.
No wrapper code needs to be written. You can see [a tutorial here](https://pytorch.org/tutorials/advanced/cpp_extension.html) and [an example here](https://github.com/pytorch/extension-cpp).
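As a minimal sketch of the Python route, here is a NumPy-backed differentiable op written with `torch.autograd.Function` (the `NumpyExp` name is purely illustrative):
```python
import numpy as np
import torch

class NumpyExp(torch.autograd.Function):
    """exp() computed with NumPy, wired into autograd by hand."""

    @staticmethod
    def forward(ctx, x):
        result = torch.from_numpy(np.exp(x.detach().numpy()))
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        (result,) = ctx.saved_tensors
        return grad_output * result  # d/dx exp(x) = exp(x)

x = torch.randn(3, dtype=torch.float64, requires_grad=True)
NumpyExp.apply(x).sum().backward()
print(torch.allclose(x.grad, x.exp()))  # True
```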
## Installation
### Binaries
Commands to install binaries via Conda or pip wheels are on our website: [https://pytorch.org/get-started/locally/](https://pytorch.org/get-started/locally/)
#### NVIDIA Jetson Platforms
Python wheels for NVIDIA's Jetson Nano, Jetson TX1/TX2, Jetson Xavier NX/AGX, and Jetson AGX Orin are provided [here](https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048) and the L4T container is published [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch).
They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv) and [@ptrblck](https://github.com/ptrblck) are maintaining them.
### From Source
#### Prerequisites
If you are installing from source, you will need:
- Python 3.9 or later
- A compiler that fully supports C++17, such as clang or gcc (on Linux, gcc 9.4.0 or newer is required)
- Visual Studio or Visual Studio Build Tool (Windows only)
\* PyTorch CI uses Visual C++ BuildTools, which come with Visual Studio Enterprise,
Professional, or Community Editions. You can also install the build tools from
https://visualstudio.microsoft.com/visual-cpp-build-tools/. The build tools *do not*
come with Visual Studio Code by default.
An example of environment setup is shown below:
* Linux:
```bash
$ source <CONDA_INSTALL_DIR>/bin/activate
$ conda create -y -n <CONDA_NAME>
$ conda activate <CONDA_NAME>
```
* Windows:
```bash
$ source <CONDA_INSTALL_DIR>\Scripts\activate.bat
$ conda create -y -n <CONDA_NAME>
$ conda activate <CONDA_NAME>
$ call "C:\Program Files\Microsoft Visual Studio\<VERSION>\Community\VC\Auxiliary\Build\vcvarsall.bat" x64
```
##### NVIDIA CUDA Support
If you want to compile with CUDA support, [select a supported version of CUDA from our support matrix](https://pytorch.org/get-started/locally/), then install the following:
- [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads)
- [NVIDIA cuDNN](https://developer.nvidia.com/cudnn) v8.5 or above
- [Compiler](https://gist.github.com/ax3l/9489132) compatible with CUDA
Note: You can refer to the [cuDNN Support Matrix](https://docs.nvidia.com/deeplearning/cudnn/backend/latest/reference/support-matrix.html) for the cuDNN versions compatible with the various supported CUDA, CUDA driver, and NVIDIA hardware versions.
If you want to disable CUDA support, export the environment variable `USE_CUDA=0`.
Other potentially useful environment variables may be found in `setup.py`.
If you are building for NVIDIA's Jetson platforms (Jetson Nano, TX1, TX2, AGX Xavier), instructions to install PyTorch for Jetson Nano are [available here](https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/).
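Once the build finishes, a quick sanity check (a suggested snippet, not part of the required steps) that CUDA support made it into the binary:
```python
import torch

print(torch.__version__)          # the version you just built
print(torch.version.cuda)         # CUDA version the build was compiled against
print(torch.cuda.is_available())  # True if a usable GPU and driver are present
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```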
##### AMD ROCm Support
If you want to compile with ROCm support, install
- [AMD ROCm](https://rocm.docs.amd.com/en/latest/deploy/linux/quick_start.html) 4.0 or above
Note that ROCm is currently supported only for Linux systems.
By default the build system expects ROCm to be installed in `/opt/rocm`. If ROCm is installed in a different directory, the `ROCM_PATH` environment variable must be set to the ROCm installation directory. The build system automatically detects the AMD GPU architecture. Optionally, the AMD GPU architecture can be explicitly set with the `PYTORCH_ROCM_ARCH` environment variable ([supported AMD GPU architectures](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html#supported-gpus)).
If you want to disable ROCm support, export the environment variable `USE_ROCM=0`.
Other potentially useful environment variables may be found in `setup.py`.
##### Intel GPU Support
If you want to compile with Intel GPU support, follow the
[PyTorch Prerequisites for Intel GPUs](https://www.intel.com/content/www/us/en/developer/articles/tool/pytorch-prerequisites-for-intel-gpus.html) instructions.
Intel GPU is supported on Linux and Windows.
If you want to disable Intel GPU support, export the environment variable `USE_XPU=0`.
Other potentially useful environment variables may be found in `setup.py`.
#### Get the PyTorch Source
```bash
git clone https://github.com/pytorch/pytorch
cd pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```
#### Install Dependencies
**Common**
```bash
conda install cmake ninja
# Run this command from the PyTorch directory after cloning the source code using the "Get the PyTorch Source" section above
pip install -r requirements.txt
```
**On Linux**
```bash
pip install mkl-static mkl-include
# CUDA only: Add LAPACK support for the GPU if needed
# magma installation: run with active conda environment. specify CUDA version to install
.ci/docker/common/install_magma_conda.sh 12.4
# (optional) If using torch.compile with inductor/triton, install the matching version of triton
# Run from the pytorch directory after cloning
# For Intel GPU support, please explicitly `export USE_XPU=1` before running the command.
make triton
```
**On macOS**
```bash
# Add this package on intel x86 processor machines only
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed
conda install pkg-config libuv
```
**On Windows**
```bash
pip install mkl-static mkl-include
# Add these packages if torch.distributed is needed.
# Distributed package support on Windows is a prototype feature and is subject to changes.
conda install -c conda-forge libuv=1.39
```
#### Install PyTorch
**On Linux**
If you're compiling for AMD ROCm then first run this command:
```bash
# Only run this if you're compiling for ROCm
python tools/amd_build/build_amd.py
```
Install PyTorch
```bash
export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
python setup.py develop
```
**On macOS**
```bash
python3 setup.py develop
```
**On Windows**
If you want to build legacy Python code, please refer to [Building on legacy code and CUDA](https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#building-on-legacy-code-and-cuda).
**CPU-only builds**
In this mode PyTorch computations will run on your CPU, not your GPU.
```cmd
python setup.py develop
```
Note on OpenMP: The desired OpenMP implementation is Intel OpenMP (iomp). In order to link against iomp, you'll need to manually download the library and set up the building environment by tweaking `CMAKE_INCLUDE_PATH` and `LIB`. The instruction [here](https://github.com/pytorch/pytorch/blob/main/docs/source/notes/windows.rst#building-from-source) is an example for setting up both MKL and Intel OpenMP. Without these configurations for CMake, Microsoft Visual C OpenMP runtime (vcomp) will be used.
**CUDA based build**
In this mode PyTorch computations will leverage your GPU via CUDA for faster number crunching.
[NVTX](https://docs.nvidia.com/gameworks/content/gameworkslibrary/nvtx/nvidia_tools_extension_library_nvtx.htm) is needed to build PyTorch with CUDA.
NVTX is part of the CUDA distribution, where it is called "Nsight Compute". To add it to an existing CUDA installation, run the CUDA installer again and check the corresponding checkbox.
Make sure that CUDA with Nsight Compute is installed after Visual Studio.
Currently, VS 2017 / 2019 and Ninja are supported as CMake generators. If `ninja.exe` is detected in `PATH`, then Ninja will be used as the default generator; otherwise, it will use VS 2017 / 2019.
<br/> If Ninja is selected as the generator, the latest MSVC will get selected as the underlying toolchain.
Additional libraries such as
[Magma](https://developer.nvidia.com/magma), [oneDNN, a.k.a. MKLDNN or DNNL](https://github.com/oneapi-src/oneDNN), and [Sccache](https://github.com/mozilla/sccache) are often needed. Please refer to the [installation-helper](https://github.com/pytorch/pytorch/tree/main/.ci/pytorch/win-test-helpers/installation-helpers) to install them.
You can refer to the [build_pytorch.bat](https://github.com/pytorch/pytorch/blob/main/.ci/pytorch/win-test-helpers/build_pytorch.bat) script for some other environment variable configurations.
```cmd
cmd
:: Set the environment variables after you have downloaded and unzipped the mkl package,
:: else CMake would throw an error as `Could NOT find OpenMP`.
set CMAKE_INCLUDE_PATH={Your directory}\mkl\include
set LIB={Your directory}\mkl\lib;%LIB%
:: Read the content in the previous section carefully before you proceed.
:: [Optional] If you want to override the underlying toolset used by Ninja and Visual Studio with CUDA, please run the following script block.
:: "Visual Studio 2019 Developer Command Prompt" will be run automatically.
:: Make sure you have CMake >= 3.12 before you do this when you use the Visual Studio generator.
set CMAKE_GENERATOR_TOOLSET_VERSION=14.27
set DISTUTILS_USE_SDK=1
for /f "usebackq tokens=*" %i in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -version [15^,17^) -products * -latest -property installationPath`) do call "%i\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=%CMAKE_GENERATOR_TOOLSET_VERSION%
:: [Optional] If you want to override the CUDA host compiler
set CUDAHOSTCXX=C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\HostX64\x64\cl.exe
python setup.py develop
```
**Intel GPU builds**
In this mode PyTorch with Intel GPU support will be built.
Please make sure [the common prerequisites](#prerequisites) as well as [the prerequisites for Intel GPU](#intel-gpu-support) are properly installed and the environment variables are configured prior to starting the build. For build tool support, `Visual Studio 2022` is required.
Then PyTorch can be built with the command:
```cmd
:: CMD Commands:
:: Set the CMAKE_PREFIX_PATH to help find corresponding packages
:: %CONDA_PREFIX% only works after `conda activate custom_env`
if defined CMAKE_PREFIX_PATH (
set "CMAKE_PREFIX_PATH=%CONDA_PREFIX%\Library;%CMAKE_PREFIX_PATH%"
) else (
set "CMAKE_PREFIX_PATH=%CONDA_PREFIX%\Library"
)
python setup.py develop
```
##### Adjust Build Options (Optional)
You can optionally adjust the configuration of CMake variables (without building first) by doing
the following. For example, adjusting the pre-detected directories for cuDNN or BLAS can be done
with such a step.
On Linux
```bash
export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
python setup.py build --cmake-only
ccmake build # or cmake-gui build
```
On macOS
```bash
export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py build --cmake-only
ccmake build # or cmake-gui build
```
### Docker Image
#### Using pre-built images
You can also pull a pre-built docker image from Docker Hub and run it with docker v19.03+:
```bash
docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest
```
Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g.
for multithreaded data loaders) the default shared memory segment size that the container runs with is not enough, and you
should increase the shared memory size with either the `--ipc=host` or the `--shm-size` command line option to `nvidia-docker run`.
#### Building the image yourself
**NOTE:** Must be built with a docker version > 18.06
The `Dockerfile` is supplied to build images with CUDA 11.1 support and cuDNN v8.
You can pass the `PYTHON_VERSION=x.y` make variable to specify which Python version is to be used by Miniconda, or leave it
unset to use the default.
```bash
make -f docker.Makefile
# images are tagged as docker.io/${your_docker_username}/pytorch
```
You can also pass the `CMAKE_VARS="..."` environment variable to specify additional CMake variables to be passed to CMake during the build.
See [setup.py](./setup.py) for the list of available variables.
```bash
CMAKE_VARS="..." make -f docker.Makefile
```
### Building the Documentation
To build documentation in various formats, you will need [Sphinx](http://www.sphinx-doc.org)
and the `pytorch_sphinx_theme2` theme.
Before you build the documentation locally, ensure `torch` is
installed in your environment. For small fixes, you can install the
nightly version as described in [Getting Started](https://pytorch.org/get-started/locally/).
For more complex fixes, such as adding a new module and docstrings for
the new module, you might need to install torch [from source](#from-source).
See [Docstring Guidelines](https://github.com/pytorch/pytorch/wiki/Docstring-Guidelines)
for docstring conventions.
```bash
cd docs/
pip install -r requirements.txt
make html
make serve
```
Run `make` to get a list of all available output formats.
If you get a katex error, run `npm install katex`. If it persists, try
`npm install -g katex`.
> [!NOTE]
> If you installed `nodejs` with a different package manager (e.g.,
> `conda`) then `npm` will probably install a version of `katex` that is not
> compatible with your version of `nodejs` and doc builds will fail.
> A combination of versions that is known to work is `[email protected]` and
> `[email protected]`. To install the latter with `npm` you can run
> ```npm install -g [email protected]```
> [!NOTE]
> If you see a numpy incompatibility error, run:
> ```
> pip install 'numpy<2'
> ```
When you make changes to the dependencies run by CI, edit the
`.ci/docker/requirements-docs.txt` file.
#### Building a PDF
To compile a PDF of all PyTorch documentation, ensure you have
`texlive` and LaTeX installed. On macOS, you can install them using:
```
brew install --cask mactex
```
To create the PDF:
1. Run:
```
make latexpdf
```
This will generate the necessary files in the `build/latex` directory.
2. Navigate to this directory and execute:
```
make LATEXOPTS="-interaction=nonstopmode"
```
This will produce a `pytorch.pdf` with the desired content. Run this
command one more time so that it generates the correct table
of contents and index.
> [!NOTE]
> To view the Table of Contents, switch to the **Table of Contents**
> view in your PDF viewer.
### Previous Versions
Installation instructions and binaries for previous PyTorch versions may be found
on [our website](https://pytorch.org/get-started/previous-versions).
## Getting Started
A few pointers to get you started:
- [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
- [Examples: easy to understand PyTorch code across all domains](https://github.com/pytorch/examples)
- [The API Reference](https://pytorch.org/docs/)
- [Glossary](https://github.com/pytorch/pytorch/blob/main/GLOSSARY.md)
## Resources
* [PyTorch.org](https://pytorch.org/)
* [PyTorch Tutorials](https://pytorch.org/tutorials/)
* [PyTorch Examples](https://github.com/pytorch/examples)
* [PyTorch Models](https://pytorch.org/hub/)
* [Intro to Deep Learning with PyTorch from Udacity](https://www.udacity.com/course/deep-learning-pytorch--ud188)
* [Intro to Machine Learning with PyTorch from Udacity](https://www.udacity.com/course/intro-to-machine-learning-nanodegree--nd229)
* [Deep Neural Networks with PyTorch from Coursera](https://www.coursera.org/learn/deep-neural-networks-with-pytorch)
* [PyTorch Twitter](https://twitter.com/PyTorch)
* [PyTorch Blog](https://pytorch.org/blog/)
* [PyTorch YouTube](https://www.youtube.com/channel/UCWXI5YeOsh03QvJ59PMaXFw)
## Communication
* Forums: Discuss implementations, research, etc. https://discuss.pytorch.org
* GitHub Issues: Bug reports, feature requests, install issues, RFCs, thoughts, etc.
* Slack: The [PyTorch Slack](https://pytorch.slack.com/) hosts a primary audience of moderate to experienced PyTorch users and developers for general chat, online discussions, collaboration, etc. If you are a beginner looking for help, the primary medium is [PyTorch Forums](https://discuss.pytorch.org). If you need a slack invite, please fill this form: https://goo.gl/forms/PP1AGvNHpSaJP8to1
* Newsletter: No-noise, a one-way email newsletter with important announcements about PyTorch. You can sign up here: https://eepurl.com/cbG0rv
* Facebook Page: Important announcements about PyTorch. https://www.facebook.com/pytorch
* For brand guidelines, please visit our website at [pytorch.org](https://pytorch.org/)
## Releases and Contributing
Typically, PyTorch has three minor releases a year. Please let us know if you encounter a bug by [filing an issue](https://github.com/pytorch/pytorch/issues).
We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.
If you plan to contribute new features, utility functions, or extensions to the core, please first open an issue and discuss the feature with us.
Sending a PR without discussion might end up resulting in a rejected PR because we might be taking the core in a different direction than you might be aware of.
To learn more about making a contribution to PyTorch, please see our [Contribution page](CONTRIBUTING.md). For more information about PyTorch releases, see the [Release page](RELEASE.md).
## The Team
PyTorch is a community-driven project with several skillful engineers and researchers contributing to it.
PyTorch is currently maintained by [Soumith Chintala](http://soumith.ch), [Gregory Chanan](https://github.com/gchanan), [Dmytro Dzhulgakov](https://github.com/dzhulgakov), [Edward Yang](https://github.com/ezyang), and [Nikita Shulga](https://github.com/malfet) with major contributions coming from hundreds of talented individuals in various forms and means.
A non-exhaustive but growing list needs to mention: [Trevor Killeen](https://github.com/killeent), [Sasank Chilamkurthy](https://github.com/chsasank), [Sergey Zagoruyko](https://github.com/szagoruyko), [Adam Lerer](https://github.com/adamlerer), [Francisco Massa](https://github.com/fmassa), [Alykhan Tejani](https://github.com/alykhantejani), [Luca Antiga](https://github.com/lantiga), [Alban Desmaison](https://github.com/albanD), [Andreas Koepf](https://github.com/andreaskoepf), [James Bradbury](https://github.com/jekbradbury), [Zeming Lin](https://github.com/ebetica), [Yuandong Tian](https://github.com/yuandong-tian), [Guillaume Lample](https://github.com/glample), [Marat Dukhan](https://github.com/Maratyszcza), [Natalia Gimelshein](https://github.com/ngimel), [Christian Sarofeen](https://github.com/csarofeen), [Martin Raison](https://github.com/martinraison), [Edward Yang](https://github.com/ezyang), [Zachary Devito](https://github.com/zdevito).
Note: This project is unrelated to [hughperkins/pytorch](https://github.com/hughperkins/pytorch) with the same name. Hugh is a valuable contributor to the Torch community and has helped with many things Torch and PyTorch.
## License
PyTorch has a BSD-style license, as found in the [LICENSE](LICENSE) file.
|
https://github.com/pytorch/pytorch
|
setup.py
|
# Welcome to the PyTorch setup.py.
# Environment variables you are probably interested in:
#
# DEBUG
# build with -O0 and -g (debug symbols)
#
# REL_WITH_DEB_INFO
# build with optimizations and -g (debug symbols)
#
# USE_CUSTOM_DEBINFO="path/to/file1.cpp;path/to/file2.cpp"
# build with debug info only for specified files
#
# MAX_JOBS
# maximum number of compile jobs we should use to compile your code
#
# USE_CUDA=0
# disables CUDA build
#
# CFLAGS
# flags to apply to both C and C++ files to be compiled (a quirk of setup.py
# which we have faithfully adhered to in our build system is that CFLAGS
# also applies to C++ files (unless CXXFLAGS is set), in contrast to the
# default behavior of autogoo and cmake build systems.)
#
# A specific flag that can be used is
# -DHAS_TORCH_SHOW_DISPATCH_TRACE
# build with dispatch trace that can be enabled with
# TORCH_SHOW_DISPATCH_TRACE=1 at runtime.
#
# CC
# the C/C++ compiler to use
#
# Environment variables for feature toggles:
#
# DEBUG_CUDA=1
# if used in conjunction with DEBUG or REL_WITH_DEB_INFO, will also
# build CUDA kernels with -lineinfo --source-in-ptx. Note that
# on CUDA 12 this may cause nvcc to OOM, so this is disabled by default.
# USE_CUDNN=0
# disables the cuDNN build
#
# USE_CUSPARSELT=0
# disables the cuSPARSELt build
#
# USE_CUDSS=0
# disables the cuDSS build
#
# USE_CUFILE=0
# disables the cuFile build
#
# USE_FBGEMM=0
# disables the FBGEMM build
#
# USE_KINETO=0
# disables usage of libkineto library for profiling
#
# USE_NUMPY=0
# disables the NumPy build
#
# BUILD_TEST=0
# disables the test build
#
# USE_MKLDNN=0
# disables use of MKLDNN
#
# USE_MKLDNN_ACL
# enables use of Compute Library backend for MKLDNN on Arm;
# USE_MKLDNN must be explicitly enabled.
#
# MKLDNN_CPU_RUNTIME
# MKL-DNN threading mode: TBB or OMP (default)
#
# USE_STATIC_MKL
# Prefer to link with MKL statically - Unix only
# USE_ITT=0
# disable use of Intel(R) VTune Profiler's ITT functionality
#
# USE_NNPACK=0
# disables NNPACK build
#
# USE_DISTRIBUTED=0
# disables distributed (c10d, gloo, mpi, etc.) build
#
# USE_TENSORPIPE=0
# disables distributed Tensorpipe backend build
#
# USE_GLOO=0
# disables distributed gloo backend build
#
# USE_MPI=0
# disables distributed MPI backend build
#
# USE_SYSTEM_NCCL=0
# disables use of system-wide nccl (we will use our submoduled
# copy in third_party/nccl)
#
# USE_OPENMP=0
# disables use of OpenMP for parallelization
#
# USE_FLASH_ATTENTION=0
# disables building flash attention for scaled dot product attention
#
# USE_MEM_EFF_ATTENTION=0
# disables building memory efficient attention for scaled dot product attention
#
# BUILD_BINARY
# enables the additional binaries/ build
#
# ATEN_AVX512_256=TRUE
# ATen AVX2 kernels can use 32 ymm registers, instead of the default 16.
# This option can be used if AVX512 doesn't perform well on a machine.
# The FBGEMM library also uses AVX512_256 kernels on Xeon D processors,
# but it also has some (optimized) assembly code.
#
# PYTORCH_BUILD_VERSION
# PYTORCH_BUILD_NUMBER
# specify the version of PyTorch, rather than the hard-coded version
# in this file; used when we're building binaries for distribution
#
# TORCH_CUDA_ARCH_LIST
# specify which CUDA architectures to build for.
# ie `TORCH_CUDA_ARCH_LIST="6.0;7.0"`
# These are not CUDA versions; instead, they specify what
# classes of NVIDIA hardware we should generate PTX for.
#
# TORCH_XPU_ARCH_LIST
# specify which XPU architectures to build for.
# ie `TORCH_XPU_ARCH_LIST="ats-m150,lnl-m"`
#
# PYTORCH_ROCM_ARCH
# specify which AMD GPU targets to build for.
# ie `PYTORCH_ROCM_ARCH="gfx900;gfx906"`
#
# ONNX_NAMESPACE
# specify a namespace for ONNX built here rather than the hard-coded
# one in this file; needed to build with other frameworks that share ONNX.
#
# BLAS
# BLAS to be used by Caffe2. Can be MKL, Eigen, ATLAS, FlexiBLAS, or OpenBLAS. If set
# then the build will fail if the requested BLAS is not found, otherwise
# the BLAS will be chosen based on what is found on your system.
#
# MKL_THREADING
# MKL threading mode: SEQ, TBB or OMP (default)
#
# USE_ROCM_KERNEL_ASSERT=1
# Enable kernel assert in ROCm platform
#
# Environment variables we respect (these environment variables are
# conventional and are often understood/set by other software.)
#
# CUDA_HOME (Linux/OS X)
# CUDA_PATH (Windows)
# specify where CUDA is installed; usually /usr/local/cuda or
# /usr/local/cuda-x.y
# CUDAHOSTCXX
# specify a different compiler than the system one to use as the CUDA
# host compiler for nvcc.
#
# CUDA_NVCC_EXECUTABLE
# Specify an NVCC to use. This is used in our CI to point to a cached nvcc.
#
# CUDNN_LIB_DIR
# CUDNN_INCLUDE_DIR
# CUDNN_LIBRARY
# specify where cuDNN is installed
#
# MIOPEN_LIB_DIR
# MIOPEN_INCLUDE_DIR
# MIOPEN_LIBRARY
# specify where MIOpen is installed
#
# NCCL_ROOT
# NCCL_LIB_DIR
# NCCL_INCLUDE_DIR
# specify where nccl is installed
#
# ACL_ROOT_DIR
# specify where Compute Library is installed
#
# LIBRARY_PATH
# LD_LIBRARY_PATH
# we will search for libraries in these paths
#
# ATEN_THREADING
# ATen parallel backend to use for intra- and inter-op parallelism
# possible values:
# OMP - use OpenMP for intra-op and native backend for inter-op tasks
# NATIVE - use native thread pool for both intra- and inter-op tasks
#
# USE_SYSTEM_LIBS (work in progress)
# Use system-provided libraries to satisfy the build dependencies.
# When turned on, the following cmake variables will be toggled as well:
# USE_SYSTEM_CPUINFO=ON
# USE_SYSTEM_SLEEF=ON
# USE_SYSTEM_GLOO=ON
# BUILD_CUSTOM_PROTOBUF=OFF
# USE_SYSTEM_EIGEN_INSTALL=ON
# USE_SYSTEM_FP16=ON
# USE_SYSTEM_PTHREADPOOL=ON
# USE_SYSTEM_PSIMD=ON
# USE_SYSTEM_FXDIV=ON
# USE_SYSTEM_BENCHMARK=ON
# USE_SYSTEM_ONNX=ON
# USE_SYSTEM_XNNPACK=ON
# USE_SYSTEM_PYBIND11=ON
# USE_SYSTEM_NCCL=ON
# USE_SYSTEM_NVTX=ON
#
# USE_MIMALLOC
# Static link mimalloc into C10, and use mimalloc in alloc_cpu & alloc_free.
# By default, it is only enabled on Windows.
#
# USE_PRIORITIZED_TEXT_FOR_LD
# Uses prioritized text from cmake/prioritized_text.txt for LD
#
# BUILD_LIBTORCH_WHL
# Builds libtorch.so and its dependencies as a wheel
#
# BUILD_PYTHON_ONLY
# Builds pytorch as a wheel using libtorch.so from a separate wheel
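#
# Example invocations (hypothetical, shown for illustration only):
#
#   DEBUG=1 MAX_JOBS=8 python setup.py develop
#   USE_CUDA=0 USE_DISTRIBUTED=0 python setup.py develop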
import os
import sys
if sys.platform == "win32" and sys.maxsize.bit_length() == 31:
print(
"32-bit Windows Python runtime is not supported. Please switch to 64-bit Python."
)
sys.exit(-1)
import platform
BUILD_LIBTORCH_WHL = os.getenv("BUILD_LIBTORCH_WHL", "0") == "1"
BUILD_PYTHON_ONLY = os.getenv("BUILD_PYTHON_ONLY", "0") == "1"
python_min_version = (3, 9, 0)
python_min_version_str = ".".join(map(str, python_min_version))
if sys.version_info < python_min_version:
print(
f"You are using Python {platform.python_version()}. Python >={python_min_version_str} is required."
)
sys.exit(-1)
import filecmp
import glob
import importlib
import importlib.util
import json
import shutil
import subprocess
import sysconfig
import time
from collections import defaultdict
import setuptools.command.build_ext
import setuptools.command.install
import setuptools.command.sdist
from setuptools import Extension, find_packages, setup
from setuptools.dist import Distribution
from tools.build_pytorch_libs import build_pytorch
from tools.generate_torch_version import get_torch_version
from tools.setup_helpers.cmake import CMake
from tools.setup_helpers.env import build_type, IS_DARWIN, IS_LINUX, IS_WINDOWS
from tools.setup_helpers.generate_linker_script import gen_linker_script
def _get_package_path(package_name):
spec = importlib.util.find_spec(package_name)
if spec:
# The package might be a namespace package, so get_data may fail
try:
loader = spec.loader
if loader is not None:
file_path = loader.get_filename() # type: ignore[attr-defined]
return os.path.dirname(file_path)
except AttributeError:
pass
return None
# set up appropriate env variables
if BUILD_LIBTORCH_WHL:
# Set up environment variables for ONLY building libtorch.so and not libtorch_python.so
# functorch is not supported without python
os.environ["BUILD_FUNCTORCH"] = "OFF"
if BUILD_PYTHON_ONLY:
os.environ["BUILD_LIBTORCHLESS"] = "ON"
os.environ["LIBTORCH_LIB_PATH"] = f"{_get_package_path('torch')}/lib"
################################################################################
# Parameters parsed from environment
################################################################################
VERBOSE_SCRIPT = True
RUN_BUILD_DEPS = True
# see if the user passed a quiet flag to setup.py arguments and respect
# that in our parts of the build
EMIT_BUILD_WARNING = False
RERUN_CMAKE = False
CMAKE_ONLY = False
filtered_args = []
for i, arg in enumerate(sys.argv):
if arg == "--cmake":
RERUN_CMAKE = True
continue
if arg == "--cmake-only":
# Stop once cmake terminates. Leave users a chance to adjust build
# options.
CMAKE_ONLY = True
continue
if arg == "rebuild" or arg == "build":
arg = "build" # rebuild is gone, make it build
EMIT_BUILD_WARNING = True
if arg == "--":
filtered_args += sys.argv[i:]
break
if arg == "-q" or arg == "--quiet":
VERBOSE_SCRIPT = False
if arg in ["clean", "egg_info", "sdist"]:
RUN_BUILD_DEPS = False
filtered_args.append(arg)
sys.argv = filtered_args
if VERBOSE_SCRIPT:
def report(*args):
print(*args)
else:
def report(*args):
pass
# Make distutils respect --quiet too
setuptools.distutils.log.warn = report
# Constant known variables used throughout this file
cwd = os.path.dirname(os.path.abspath(__file__))
lib_path = os.path.join(cwd, "torch", "lib")
third_party_path = os.path.join(cwd, "third_party")
# CMAKE: full path to python library
if IS_WINDOWS:
cmake_python_library = "{}/libs/python{}.lib".format(
sysconfig.get_config_var("prefix"), sysconfig.get_config_var("VERSION")
)
# Fix virtualenv builds
if not os.path.exists(cmake_python_library):
cmake_python_library = "{}/libs/python{}.lib".format(
sys.base_prefix, sysconfig.get_config_var("VERSION")
)
else:
cmake_python_library = "{}/{}".format(
sysconfig.get_config_var("LIBDIR"), sysconfig.get_config_var("INSTSONAME")
)
cmake_python_include_dir = sysconfig.get_path("include")
################################################################################
# Version, create_version_file, and package_name
################################################################################
package_name = os.getenv("TORCH_PACKAGE_NAME", "torch")
LIBTORCH_PKG_NAME = os.getenv("LIBTORCH_PACKAGE_NAME", "torch_no_python")
if BUILD_LIBTORCH_WHL:
package_name = LIBTORCH_PKG_NAME
package_type = os.getenv("PACKAGE_TYPE", "wheel")
version = get_torch_version()
report(f"Building wheel {package_name}-{version}")
cmake = CMake()
def get_submodule_folders():
git_modules_path = os.path.join(cwd, ".gitmodules")
default_modules_path = [
os.path.join(third_party_path, name)
for name in [
"gloo",
"cpuinfo",
"onnx",
"fbgemm",
"cutlass",
]
]
if not os.path.exists(git_modules_path):
return default_modules_path
with open(git_modules_path) as f:
return [
os.path.join(cwd, line.split("=", 1)[1].strip())
for line in f
if line.strip().startswith("path")
]
def check_submodules():
def check_for_files(folder, files):
if not any(os.path.exists(os.path.join(folder, f)) for f in files):
report("Could not find any of {} in {}".format(", ".join(files), folder))
report("Did you run 'git submodule update --init --recursive'?")
sys.exit(1)
def not_exists_or_empty(folder):
return not os.path.exists(folder) or (
os.path.isdir(folder) and len(os.listdir(folder)) == 0
)
if bool(os.getenv("USE_SYSTEM_LIBS", False)):
return
folders = get_submodule_folders()
# If none of the submodule folders exists, try to initialize them
if all(not_exists_or_empty(folder) for folder in folders):
try:
report(" --- Trying to initialize submodules")
start = time.time()
subprocess.check_call(
["git", "submodule", "update", "--init", "--recursive"], cwd=cwd
)
end = time.time()
report(f" --- Submodule initialization took {end - start:.2f} sec")
except Exception:
      report(" --- Submodule initialization failed")
report("Please run:\n\tgit submodule update --init --recursive")
sys.exit(1)
for folder in folders:
check_for_files(
folder,
[
"CMakeLists.txt",
"Makefile",
"setup.py",
"LICENSE",
"LICENSE.md",
"LICENSE.txt",
],
)
check_for_files(
os.path.join(third_party_path, "fbgemm", "external", "asmjit"),
["CMakeLists.txt"],
)
# Windows has very bad support for symbolic links.
# Instead of using symlinks, we're going to copy files over
def mirror_files_into_torchgen():
# (new_path, orig_path)
# Directories are OK and are recursively mirrored.
paths = [
(
"torchgen/packaged/ATen/native/native_functions.yaml",
"aten/src/ATen/native/native_functions.yaml",
),
("torchgen/packaged/ATen/native/tags.yaml", "aten/src/ATen/native/tags.yaml"),
("torchgen/packaged/ATen/templates", "aten/src/ATen/templates"),
("torchgen/packaged/autograd", "tools/autograd"),
("torchgen/packaged/autograd/templates", "tools/autograd/templates"),
]
for new_path, orig_path in paths:
# Create the dirs involved in new_path if they don't exist
if not os.path.exists(new_path):
os.makedirs(os.path.dirname(new_path), exist_ok=True)
# Copy the files from the orig location to the new location
if os.path.isfile(orig_path):
shutil.copyfile(orig_path, new_path)
continue
if os.path.isdir(orig_path):
if os.path.exists(new_path):
# copytree fails if the tree exists already, so remove it.
shutil.rmtree(new_path)
shutil.copytree(orig_path, new_path)
continue
raise RuntimeError("Check the file paths in `mirror_files_into_torchgen()`")
# all the work we need to do _before_ setup runs
def build_deps():
report("-- Building version " + version)
check_submodules()
check_pydep("yaml", "pyyaml")
build_python = not BUILD_LIBTORCH_WHL
build_pytorch(
version=version,
cmake_python_library=cmake_python_library,
build_python=build_python,
rerun_cmake=RERUN_CMAKE,
cmake_only=CMAKE_ONLY,
cmake=cmake,
)
if CMAKE_ONLY:
report(
'Finished running cmake. Run "ccmake build" or '
'"cmake-gui build" to adjust build options and '
'"python setup.py install" to build.'
)
sys.exit()
# Use copies instead of symbolic files.
# Windows has very poor support for them.
sym_files = [
"tools/shared/_utils_internal.py",
"torch/utils/benchmark/utils/valgrind_wrapper/callgrind.h",
"torch/utils/benchmark/utils/valgrind_wrapper/valgrind.h",
]
orig_files = [
"torch/_utils_internal.py",
"third_party/valgrind-headers/callgrind.h",
"third_party/valgrind-headers/valgrind.h",
]
for sym_file, orig_file in zip(sym_files, orig_files):
same = False
if os.path.exists(sym_file):
if filecmp.cmp(sym_file, orig_file):
same = True
else:
os.remove(sym_file)
if not same:
shutil.copyfile(orig_file, sym_file)
################################################################################
# Building dependent libraries
################################################################################
missing_pydep = """
Missing build dependency: Unable to `import {importname}`.
Please install it via `conda install {module}` or `pip install {module}`
""".strip()
def check_pydep(importname, module):
try:
importlib.import_module(importname)
except ImportError as e:
raise RuntimeError(
missing_pydep.format(importname=importname, module=module)
) from e
class build_ext(setuptools.command.build_ext.build_ext):
def _embed_libomp(self):
# Copy libiomp5.dylib/libomp.dylib inside the wheel package on MacOS
lib_dir = os.path.join(self.build_lib, "torch", "lib")
libtorch_cpu_path = os.path.join(lib_dir, "libtorch_cpu.dylib")
if not os.path.exists(libtorch_cpu_path):
return
# Parse libtorch_cpu load commands
otool_cmds = (
subprocess.check_output(["otool", "-l", libtorch_cpu_path])
.decode("utf-8")
.split("\n")
)
rpaths, libs = [], []
for idx, line in enumerate(otool_cmds):
if line.strip() == "cmd LC_LOAD_DYLIB":
lib_name = otool_cmds[idx + 2].strip()
assert lib_name.startswith("name ")
libs.append(lib_name.split(" ", 1)[1].rsplit("(", 1)[0][:-1])
if line.strip() == "cmd LC_RPATH":
rpath = otool_cmds[idx + 2].strip()
assert rpath.startswith("path ")
rpaths.append(rpath.split(" ", 1)[1].rsplit("(", 1)[0][:-1])
omplib_path = get_cmake_cache_vars()["OpenMP_libomp_LIBRARY"]
omplib_name = get_cmake_cache_vars()["OpenMP_C_LIB_NAMES"] + ".dylib"
omplib_rpath_path = os.path.join("@rpath", omplib_name)
# This logic is fragile and checks only two cases:
    # - libtorch_cpu depends on `@rpath/libomp.dylib` (happens when built inside miniconda environment)
# - libtorch_cpu depends on `/abs/path/to/libomp.dylib` (happens when built with libomp from homebrew)
if not any(c in libs for c in [omplib_path, omplib_rpath_path]):
return
# Copy libomp/libiomp5 from rpath locations
target_lib = os.path.join(self.build_lib, "torch", "lib", omplib_name)
libomp_relocated = False
for rpath in rpaths:
source_lib = os.path.join(rpath, omplib_name)
if not os.path.exists(source_lib):
continue
self.copy_file(source_lib, target_lib)
# Delete old rpath and add @loader_lib to the rpath
# This should prevent delocate from attempting to package another instance
# of OpenMP library in torch wheel as well as loading two libomp.dylib into
# the address space, as libraries are cached by their unresolved names
install_name_tool_args = [
"-rpath",
rpath,
"@loader_path",
]
libomp_relocated = True
break
if not libomp_relocated and os.path.exists(omplib_path):
self.copy_file(omplib_path, target_lib)
install_name_tool_args = [
"-change",
omplib_path,
omplib_rpath_path,
]
if "@loader_path" not in rpaths:
install_name_tool_args += [
"-add_rpath",
"@loader_path",
]
libomp_relocated = True
if libomp_relocated:
install_name_tool_args.insert(0, "install_name_tool")
install_name_tool_args.append(libtorch_cpu_path)
subprocess.check_call(install_name_tool_args)
# Copy omp.h from OpenMP_C_FLAGS and copy it into include folder
omp_cflags = get_cmake_cache_vars()["OpenMP_C_FLAGS"]
if not omp_cflags:
return
for include_dir in [f[2:] for f in omp_cflags.split(" ") if f.startswith("-I")]:
omp_h = os.path.join(include_dir, "omp.h")
if not os.path.exists(omp_h):
continue
target_omp_h = os.path.join(self.build_lib, "torch", "include", "omp.h")
self.copy_file(omp_h, target_omp_h)
break
def run(self):
    # Report build options. This is run after the build completes so
    # `CMakeCache.txt` exists and we can get an accurate report on what
    # is used and what is not.
cmake_cache_vars = defaultdict(lambda: False, cmake.get_cmake_cache_variables())
if cmake_cache_vars["USE_NUMPY"]:
report("-- Building with NumPy bindings")
else:
report("-- NumPy not found")
if cmake_cache_vars["USE_CUDNN"]:
report(
"-- Detected cuDNN at "
+ cmake_cache_vars["CUDNN_LIBRARY"]
+ ", "
+ cmake_cache_vars["CUDNN_INCLUDE_DIR"]
)
else:
report("-- Not using cuDNN")
if cmake_cache_vars["USE_CUDA"]:
report("-- Detected CUDA at " + cmake_cache_vars["CUDA_TOOLKIT_ROOT_DIR"])
else:
report("-- Not using CUDA")
if cmake_cache_vars["USE_XPU"]:
report("-- Detected XPU runtime at " + cmake_cache_vars["SYCL_LIBRARY_DIR"])
else:
report("-- Not using XPU")
if cmake_cache_vars["USE_MKLDNN"]:
report("-- Using MKLDNN")
if cmake_cache_vars["USE_MKLDNN_ACL"]:
report("-- Using Compute Library for the Arm architecture with MKLDNN")
else:
report(
"-- Not using Compute Library for the Arm architecture with MKLDNN"
)
if cmake_cache_vars["USE_MKLDNN_CBLAS"]:
report("-- Using CBLAS in MKLDNN")
else:
report("-- Not using CBLAS in MKLDNN")
else:
report("-- Not using MKLDNN")
if cmake_cache_vars["USE_NCCL"] and cmake_cache_vars["USE_SYSTEM_NCCL"]:
report(
"-- Using system provided NCCL library at {}, {}".format(
cmake_cache_vars["NCCL_LIBRARIES"],
cmake_cache_vars["NCCL_INCLUDE_DIRS"],
)
)
elif cmake_cache_vars["USE_NCCL"]:
report("-- Building NCCL library")
else:
report("-- Not using NCCL")
if cmake_cache_vars["USE_DISTRIBUTED"]:
if IS_WINDOWS:
report("-- Building without distributed package")
else:
report("-- Building with distributed package: ")
report(
" -- USE_TENSORPIPE={}".format(cmake_cache_vars["USE_TENSORPIPE"])
)
report(" -- USE_GLOO={}".format(cmake_cache_vars["USE_GLOO"]))
report(" -- USE_MPI={}".format(cmake_cache_vars["USE_OPENMPI"]))
else:
report("-- Building without distributed package")
if cmake_cache_vars["STATIC_DISPATCH_BACKEND"]:
report(
"-- Using static dispatch with backend {}".format(
cmake_cache_vars["STATIC_DISPATCH_BACKEND"]
)
)
if cmake_cache_vars["USE_LIGHTWEIGHT_DISPATCH"]:
report("-- Using lightweight dispatch")
if cmake_cache_vars["BUILD_EXECUTORCH"]:
report("-- Building Executorch")
if cmake_cache_vars["USE_ITT"]:
report("-- Using ITT")
else:
report("-- Not using ITT")
# Do not use clang to compile extensions if `-fstack-clash-protection` is defined
# in system CFLAGS
c_flags = str(os.getenv("CFLAGS", ""))
if (
IS_LINUX
and "-fstack-clash-protection" in c_flags
and "clang" in os.environ.get("CC", "")
):
os.environ["CC"] = str(os.environ["CC"])
# It's an old-style class in Python 2.7...
setuptools.command.build_ext.build_ext.run(self)
if IS_DARWIN:
self._embed_libomp()
# Copy the essential export library to compile C++ extensions.
if IS_WINDOWS:
build_temp = self.build_temp
ext_filename = self.get_ext_filename("_C")
lib_filename = ".".join(ext_filename.split(".")[:-1]) + ".lib"
export_lib = os.path.join(
build_temp, "torch", "csrc", lib_filename
).replace("\\", "/")
build_lib = self.build_lib
target_lib = os.path.join(build_lib, "torch", "lib", "_C.lib").replace(
"\\", "/"
)
# Create "torch/lib" directory if not exists.
# (It is not created yet in "develop" mode.)
target_dir = os.path.dirname(target_lib)
if not os.path.exists(target_dir):
os.makedirs(target_dir)
self.copy_file(export_lib, target_lib)
# In ROCm on Windows case copy rocblas and hipblaslt files into
# torch/lib/rocblas/library and torch/lib/hipblaslt/library
use_rocm = os.environ.get("USE_ROCM")
if use_rocm:
rocm_dir_path = os.environ.get("ROCM_DIR")
rocm_bin_path = os.path.join(rocm_dir_path, "bin")
rocblas_dir = os.path.join(rocm_bin_path, "rocblas")
target_rocblas_dir = os.path.join(target_dir, "rocblas")
os.makedirs(target_rocblas_dir, exist_ok=True)
self.copy_tree(rocblas_dir, target_rocblas_dir)
hipblaslt_dir = os.path.join(rocm_bin_path, "hipblaslt")
target_hipblaslt_dir = os.path.join(target_dir, "hipblaslt")
os.makedirs(target_hipblaslt_dir, exist_ok=True)
self.copy_tree(hipblaslt_dir, target_hipblaslt_dir)
else:
report("The specified environment variable does not exist.")
def build_extensions(self):
self.create_compile_commands()
# Copy functorch extension
for i, ext in enumerate(self.extensions):
if ext.name != "functorch._C":
continue
fullname = self.get_ext_fullname(ext.name)
filename = self.get_ext_filename(fullname)
fileext = os.path.splitext(filename)[1]
src = os.path.join(os.path.dirname(filename), "functorch" + fileext)
dst = os.path.join(os.path.realpath(self.build_lib), filename)
if os.path.exists(src):
report(f"Copying {ext.name} from {src} to {dst}")
dst_dir = os.path.dirname(dst)
if not os.path.exists(dst_dir):
os.makedirs(dst_dir)
self.copy_file(src, dst)
setuptools.command.build_ext.build_ext.build_extensions(self)
def get_outputs(self):
outputs = setuptools.command.build_ext.build_ext.get_outputs(self)
outputs.append(os.path.join(self.build_lib, "caffe2"))
report(f"setup.py::get_outputs returning {outputs}")
return outputs
def create_compile_commands(self):
def load(filename):
with open(filename) as f:
return json.load(f)
ninja_files = glob.glob("build/*compile_commands.json")
cmake_files = glob.glob("torch/lib/build/*/compile_commands.json")
all_commands = [entry for f in ninja_files + cmake_files for entry in load(f)]
# cquery does not like c++ compiles that start with gcc.
# It forgets to include the c++ header directories.
# We can work around this by replacing the gcc calls that python
# setup.py generates with g++ calls instead
for command in all_commands:
if command["command"].startswith("gcc "):
command["command"] = "g++ " + command["command"][4:]
new_contents = json.dumps(all_commands, indent=2)
contents = ""
if os.path.exists("compile_commands.json"):
with open("compile_commands.json") as f:
contents = f.read()
if contents != new_contents:
with open("compile_commands.json", "w") as f:
f.write(new_contents)
class concat_license_files:
"""Merge LICENSE and LICENSES_BUNDLED.txt as a context manager
LICENSE is the main PyTorch license, LICENSES_BUNDLED.txt is auto-generated
from all the licenses found in ./third_party/. We concatenate them so there
is a single license file in the sdist and wheels with all of the necessary
licensing info.
"""
def __init__(self, include_files=False):
self.f1 = "LICENSE"
self.f2 = "third_party/LICENSES_BUNDLED.txt"
self.include_files = include_files
def __enter__(self):
"""Concatenate files"""
old_path = sys.path
sys.path.append(third_party_path)
try:
from build_bundled import create_bundled
finally:
sys.path = old_path
with open(self.f1) as f1:
self.bsd_text = f1.read()
with open(self.f1, "a") as f1:
f1.write("\n\n")
create_bundled(
os.path.relpath(third_party_path), f1, include_files=self.include_files
)
def __exit__(self, exception_type, exception_value, traceback):
"""Restore content of f1"""
with open(self.f1, "w") as f:
f.write(self.bsd_text)
try:
from wheel.bdist_wheel import bdist_wheel
except ImportError:
# This is useful when wheel is not installed and bdist_wheel is not
# specified on the command line. If it _is_ specified, parsing the command
# line will fail before wheel_concatenate is needed
wheel_concatenate = None
else:
# Need to create the proper LICENSE.txt for the wheel
class wheel_concatenate(bdist_wheel):
"""check submodules on sdist to prevent incomplete tarballs"""
def run(self):
with concat_license_files(include_files=True):
super().run()
def write_wheelfile(self, *args, **kwargs):
super().write_wheelfile(*args, **kwargs)
if BUILD_LIBTORCH_WHL:
        # Remove extraneous files in the libtorch wheel
for root, dirs, files in os.walk(self.bdist_dir):
for file in files:
if file.endswith((".a", ".so")) and os.path.isfile(
os.path.join(self.bdist_dir, file)
):
os.remove(os.path.join(root, file))
elif file.endswith(".py"):
os.remove(os.path.join(root, file))
# need an __init__.py file otherwise we wouldn't have a package
open(os.path.join(self.bdist_dir, "torch", "__init__.py"), "w").close()
class install(setuptools.command.install.install):
def run(self):
super().run()
class clean(setuptools.Command):
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
import glob
import re
with open(".gitignore") as f:
ignores = f.read()
pat = re.compile(r"^#( BEGIN NOT-CLEAN-FILES )?")
for wildcard in filter(None, ignores.split("\n")):
match = pat.match(wildcard)
if match:
if match.group(1):
# Marker is found and stop reading .gitignore.
break
# Ignore lines which begin with '#'.
else:
# Don't remove absolute paths from the system
wildcard = wildcard.lstrip("./")
for filename in glob.glob(wildcard):
try:
os.remove(filename)
except OSError:
shutil.rmtree(filename, ignore_errors=True)
class sdist(setuptools.command.sdist.sdist):
def run(self):
with concat_license_files():
super().run()
def get_cmake_cache_vars():
try:
return defaultdict(lambda: False, cmake.get_cmake_cache_variables())
except FileNotFoundError:
# CMakeCache.txt does not exist. Probably running "python setup.py clean" over a clean directory.
return defaultdict(lambda: False)
def configure_extension_build():
r"""Configures extension build options according to system environment and user's choice.
Returns:
The input to parameters ext_modules, cmdclass, packages, and entry_points as required in setuptools.setup.
"""
cmake_cache_vars = get_cmake_cache_vars()
################################################################################
# Configure compile flags
################################################################################
library_dirs = []
extra_install_requires = []
if IS_WINDOWS:
# /NODEFAULTLIB makes sure we only link to DLL runtime
# and matches the flags set for protobuf and ONNX
extra_link_args = ["/NODEFAULTLIB:LIBCMT.LIB"]
# /MD links against DLL runtime
# and matches the flags set for protobuf and ONNX
# /EHsc is about standard C++ exception handling
extra_compile_args = ["/MD", "/FS", "/EHsc"]
else:
extra_link_args = []
extra_compile_args = [
"-Wall",
"-Wextra",
"-Wno-strict-overflow",
"-Wno-unused-parameter",
"-Wno-missing-field-initializers",
"-Wno-unknown-pragmas",
# Python 2.6 requires -fno-strict-aliasing, see
# http://legacy.python.org/dev/peps/pep-3123/
# We also depend on it in our code (even Python 3).
"-fno-strict-aliasing",
]
library_dirs.append(lib_path)
main_compile_args = []
main_libraries = ["torch_python"]
main_link_args = []
main_sources = ["torch/csrc/stub.c"]
if BUILD_LIBTORCH_WHL:
main_libraries = ["torch"]
main_sources = []
if build_type.is_debug():
if IS_WINDOWS:
extra_compile_args.append("/Z7")
extra_link_args.append("/DEBUG:FULL")
else:
extra_compile_args += ["-O0", "-g"]
extra_link_args += ["-O0", "-g"]
if build_type.is_rel_with_deb_info():
if IS_WINDOWS:
extra_compile_args.append("/Z7")
extra_link_args.append("/DEBUG:FULL")
else:
extra_compile_args += ["-g"]
extra_link_args += ["-g"]
# pypi cuda package that requires installation of cuda runtime, cudnn and cublas
# should be included in all wheels uploaded to pypi
pytorch_extra_install_requirements = os.getenv(
"PYTORCH_EXTRA_INSTALL_REQUIREMENTS", ""
)
if pytorch_extra_install_requirements:
report(
f"pytorch_extra_install_requirements: {pytorch_extra_install_requirements}"
)
extra_install_requires += pytorch_extra_install_requirements.split("|")
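# For example (hypothetical pins), PYTORCH_EXTRA_INSTALL_REQUIREMENTS=
# "nvidia-cublas-cu12==12.4.5.8|nvidia-cudnn-cu12==9.1.0.70" adds both
# requirements to the wheel's install_requires, split on "|".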
# Cross-compile for M1
if IS_DARWIN:
macos_target_arch = os.getenv("CMAKE_OSX_ARCHITECTURES", "")
if macos_target_arch in ["arm64", "x86_64"]:
macos_sysroot_path = os.getenv("CMAKE_OSX_SYSROOT")
if macos_sysroot_path is None:
macos_sysroot_path = (
subprocess.check_output(
["xcrun", "--show-sdk-path", "--sdk", "macosx"]
)
.decode("utf-8")
.strip()
)
extra_compile_args += [
"-arch",
macos_target_arch,
"-isysroot",
macos_sysroot_path,
]
extra_link_args += ["-arch", macos_target_arch]
def make_relative_rpath_args(path):
if IS_DARWIN:
return ["-Wl,-rpath,@loader_path/" + path]
elif IS_WINDOWS:
return []
else:
return ["-Wl,-rpath,$ORIGIN/" + path]
################################################################################
# Declare extensions and package
################################################################################
extensions = []
excludes = ["tools", "tools.*", "caffe2", "caffe2.*"]
if not cmake_cache_vars["BUILD_FUNCTORCH"]:
excludes.extend(["functorch", "functorch.*"])
packages = find_packages(exclude=excludes)
C = Extension(
"torch._C",
libraries=main_libraries,
sources=main_sources,
language="c",
extra_compile_args=main_compile_args + extra_compile_args,
include_dirs=[],
library_dirs=library_dirs,
extra_link_args=extra_link_args
+ main_link_args
+ make_relative_rpath_args("lib"),
)
extensions.append(C)
# These extensions are built by cmake and copied manually in build_extensions()
# inside the build_ext implementation
if cmake_cache_vars["BUILD_FUNCTORCH"]:
extensions.append(
Extension(name="functorch._C", sources=[]),
)
cmdclass = {
"bdist_wheel": wheel_concatenate,
"build_ext": build_ext,
"clean": clean,
"install": install,
"sdist": sdist,
}
entry_points = {
"console_scripts": [
"torchrun = torch.distributed.run:main",
],
"torchrun.logs_specs": [
"default = torch.distributed.elastic.multiprocessing:DefaultLogsSpecs",
],
}
if cmake_cache_vars["USE_DISTRIBUTED"]:
# Only enable fr_trace command if distributed is enabled
entry_points["console_scripts"].append(
"torchfrtrace = tools.flight_recorder.fr_trace:main",
)
return extensions, cmdclass, packages, entry_points, extra_install_requires
# post run, warnings, printed at the end to make them more visible
build_update_message = """
It is no longer necessary to use the 'build' or 'rebuild' targets.
To install:
$ python setup.py install
To develop locally:
$ python setup.py develop
To force cmake to re-generate native build files (off by default):
$ python setup.py develop --cmake
"""
def print_box(msg):
lines = msg.split("\n")
size = max(len(l) + 1 for l in lines)
print("-" * (size + 2))
for l in lines:
print("|{}{}|".format(l, " " * (size - len(l))))
print("-" * (size + 2))
def main():
if BUILD_LIBTORCH_WHL and BUILD_PYTHON_ONLY:
raise RuntimeError(
"Conflict: 'BUILD_LIBTORCH_WHL' and 'BUILD_PYTHON_ONLY' can't both be 1. Set one to 0 and rerun."
)
install_requires = [
"filelock",
"typing-extensions>=4.10.0",
'setuptools ; python_version >= "3.12"',
"sympy>=1.13.3",
"networkx",
"jinja2",
"fsspec",
]
if BUILD_PYTHON_ONLY:
install_requires.append(f"{LIBTORCH_PKG_NAME}=={get_torch_version()}")
use_prioritized_text = str(os.getenv("USE_PRIORITIZED_TEXT_FOR_LD", ""))
if (
use_prioritized_text == ""
and platform.system() == "Linux"
and platform.processor() == "aarch64"
):
print_box(
"""
WARNING: we strongly recommend enabling linker script optimization for ARM + CUDA.
To do so please export USE_PRIORITIZED_TEXT_FOR_LD=1
"""
)
if use_prioritized_text == "1" or use_prioritized_text == "True":
gen_linker_script(
filein="cmake/prioritized_text.txt", fout="cmake/linker_script.ld"
)
linker_script_path = os.path.abspath("cmake/linker_script.ld")
os.environ["LDFLAGS"] = os.getenv("LDFLAGS", "") + f" -T{linker_script_path}"
os.environ["CFLAGS"] = (
os.getenv("CFLAGS", "") + " -ffunction-sections -fdata-sections"
)
os.environ["CXXFLAGS"] = (
os.getenv("CXXFLAGS", "") + " -ffunction-sections -fdata-sections"
)
# Parse the command line and check the arguments before we proceed with
# building deps and setup. We need to set values so `--help` works.
dist = Distribution()
dist.script_name = os.path.basename(sys.argv[0])
dist.script_args = sys.argv[1:]
try:
dist.parse_command_line()
except setuptools.distutils.errors.DistutilsArgError as e:
print(e)
sys.exit(1)
mirror_files_into_torchgen()
if RUN_BUILD_DEPS:
build_deps()
(
extensions,
cmdclass,
packages,
entry_points,
extra_install_requires,
) = configure_extension_build()
install_requires += extra_install_requires
extras_require = {
"optree": ["optree>=0.13.0"],
"opt-einsum": ["opt-einsum>=3.3"],
"pyyaml": ["pyyaml"],
}
# Read in README.md for our long_description
with open(os.path.join(cwd, "README.md"), encoding="utf-8") as f:
long_description = f.read()
version_range_max = max(sys.version_info[1], 13) + 1
torch_package_data = [
"py.typed",
"bin/*",
"test/*",
"*.pyi",
"**/*.pyi",
"lib/*.pdb",
"lib/**/*.pdb",
"lib/*shm*",
"lib/torch_shm_manager",
"lib/*.h",
"lib/**/*.h",
"include/*.h",
"include/**/*.h",
"include/*.hpp",
"include/**/*.hpp",
"include/*.cuh",
"include/**/*.cuh",
"_inductor/codegen/*.h",
"_inductor/codegen/aoti_runtime/*.cpp",
"_inductor/script.ld",
"_export/serde/*.yaml",
"_export/serde/*.thrift",
"share/cmake/ATen/*.cmake",
"share/cmake/Caffe2/*.cmake",
"share/cmake/Caffe2/public/*.cmake",
"share/cmake/Caffe2/Modules_CUDA_fix/*.cmake",
"share/cmake/Caffe2/Modules_CUDA_fix/upstream/*.cmake",
"share/cmake/Caffe2/Modules_CUDA_fix/upstream/FindCUDA/*.cmake",
"share/cmake/Gloo/*.cmake",
"share/cmake/Tensorpipe/*.cmake",
"share/cmake/Torch/*.cmake",
"utils/benchmark/utils/*.cpp",
"utils/benchmark/utils/valgrind_wrapper/*.cpp",
"utils/benchmark/utils/valgrind_wrapper/*.h",
"utils/model_dump/skeleton.html",
"utils/model_dump/code.js",
"utils/model_dump/*.mjs",
]
if not BUILD_LIBTORCH_WHL:
torch_package_data.extend(
[
"lib/libtorch_python.so",
"lib/libtorch_python.dylib",
"lib/libtorch_python.dll",
]
)
if not BUILD_PYTHON_ONLY:
torch_package_data.extend(
[
"lib/*.so*",
"lib/*.dylib*",
"lib/*.dll",
"lib/*.lib",
]
)
aotriton_image_path = os.path.join(lib_path, "aotriton.images")
aks2_files = []
for root, dirs, files in os.walk(aotriton_image_path):
subpath = os.path.relpath(root, start=aotriton_image_path)
for fn in files:
aks2_files.append(os.path.join("lib/aotriton.images", subpath, fn))
torch_package_data += aks2_files
if get_cmake_cache_vars()["USE_TENSORPIPE"]:
torch_package_data.extend(
[
"include/tensorpipe/*.h",
"include/tensorpipe/**/*.h",
]
)
if get_cmake_cache_vars()["USE_KINETO"]:
torch_package_data.extend(
[
"include/kineto/*.h",
"include/kineto/**/*.h",
]
)
torchgen_package_data = [
"packaged/*",
"packaged/**/*",
]
package_data = {
"torch": torch_package_data,
}
if not BUILD_LIBTORCH_WHL:
package_data["torchgen"] = torchgen_package_data
else:
# no extensions in BUILD_LIBTORCH_WHL mode
extensions = []
setup(
name=package_name,
version=version,
description=(
"Tensors and Dynamic neural networks in Python with strong GPU acceleration"
),
long_description=long_description,
long_description_content_type="text/markdown",
ext_modules=extensions,
cmdclass=cmdclass,
packages=packages,
entry_points=entry_points,
install_requires=install_requires,
extras_require=extras_require,
package_data=package_data,
# TODO fix later Manifest.IN file was previously ignored
include_package_data=False, # defaults to True with pyproject.toml file
url="https://pytorch.org/",
download_url="https://github.com/pytorch/pytorch/tags",
author="PyTorch Team",
author_email="[email protected]",
python_requires=f">={python_min_version_str}",
# PyPI package information.
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Mathematics",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
"Topic :: Software Development",
"Topic :: Software Development :: Libraries",
"Topic :: Software Development :: Libraries :: Python Modules",
"Programming Language :: C++",
"Programming Language :: Python :: 3",
]
+ [
f"Programming Language :: Python :: 3.{i}"
for i in range(python_min_version[1], version_range_max)
],
license="BSD-3-Clause",
keywords="pytorch, machine learning",
)
if EMIT_BUILD_WARNING:
print_box(build_update_message)
if __name__ == "__main__":
main()
|
https://github.com/huggingface/transformers
|
README.md
|
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-dark.svg">
<source media="(prefers-color-scheme: light)" srcset="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg">
<img alt="Hugging Face Transformers Library" src="https://huggingface.co/datasets/huggingface/documentation-images/raw/main/transformers-logo-light.svg" width="352" height="59" style="max-width: 100%;">
</picture>
<br/>
<br/>
</p>
<p align="center">
<a href="https://huggingface.com/models"><img alt="Checkpoints on Hub" src="https://img.shields.io/endpoint?url=https://huggingface.co/api/shields/models&color=brightgreen"></a>
<a href="https://circleci.com/gh/huggingface/transformers"><img alt="Build" src="https://img.shields.io/circleci/build/github/huggingface/transformers/main"></a>
<a href="https://github.com/huggingface/transformers/blob/main/LICENSE"><img alt="GitHub" src="https://img.shields.io/github/license/huggingface/transformers.svg?color=blue"></a>
<a href="https://huggingface.co/docs/transformers/index"><img alt="Documentation" src="https://img.shields.io/website/http/huggingface.co/docs/transformers/index.svg?down_color=red&down_message=offline&up_message=online"></a>
<a href="https://github.com/huggingface/transformers/releases"><img alt="GitHub release" src="https://img.shields.io/github/release/huggingface/transformers.svg"></a>
<a href="https://github.com/huggingface/transformers/blob/main/CODE_OF_CONDUCT.md"><img alt="Contributor Covenant" src="https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg"></a>
<a href="https://zenodo.org/badge/latestdoi/155220641"><img src="https://zenodo.org/badge/155220641.svg" alt="DOI"></a>
</p>
<h4 align="center">
<p>
<b>English</b> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_zh-hans.md">简体中文</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_zh-hant.md">繁體中文</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ko.md">한국어</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_es.md">Español</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ja.md">日本語</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_hd.md">हिन्दी</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ru.md">Русский</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_pt-br.md">Рortuguês</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_te.md">తెలుగు</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_fr.md">Français</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_de.md">Deutsch</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_vi.md">Tiếng Việt</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ar.md">العربية</a> |
<a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ur.md">اردو</a> |
</p>
</h4>
<h3 align="center">
<p>State-of-the-art pretrained models for inference and training</p>
</h3>
<h3 align="center">
<a href="https://hf.co/course"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/course_banner.png"></a>
</h3>
Transformers is a library of pretrained text, computer vision, audio, video, and multimodal models for inference and training. Use Transformers to fine-tune models on your data, build inference applications, and power generative AI use cases across multiple modalities.
There are over 500K Transformers [model checkpoints](https://huggingface.co/models?library=transformers&sort=trending) on the [Hugging Face Hub](https://huggingface.com/models) you can use.
Explore the [Hub](https://huggingface.com/) today to find a model and use Transformers to get started right away.
## Installation
Transformers works with Python 3.9+, [PyTorch](https://pytorch.org/get-started/locally/) 2.1+, [TensorFlow](https://www.tensorflow.org/install/pip) 2.6+, and [Flax](https://flax.readthedocs.io/en/latest/) 0.4.1+.
Create and activate a virtual environment with [venv](https://docs.python.org/3/library/venv.html) or [uv](https://docs.astral.sh/uv/), a fast Rust-based Python package and project manager.
```py
# venv
python -m venv .my-env
source .my-env/bin/activate
# uv
uv venv .my-env
source .my-env/bin/activate
```
Install Transformers in your virtual environment.
```py
# pip
pip install "transformers[torch]"
# uv
uv pip install "transformers[torch]"
```
Install Transformers from source if you want the latest changes in the library or are interested in contributing. However, the *latest* version may not be stable. Feel free to open an [issue](https://github.com/huggingface/transformers/issues) if you encounter an error.
```shell
git clone https://github.com/huggingface/transformers.git
cd transformers
# pip
pip install .[torch]
# uv
uv pip install .[torch]
```
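For day-to-day development, an editable install is a common alternative, so local changes take effect without reinstalling (a sketch using the same `[torch]` extra as above):
```shell
pip install -e ".[torch]"
```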
## Quickstart
Get started with Transformers right away with the [Pipeline](https://huggingface.co/docs/transformers/pipeline_tutorial) API. The `Pipeline` is a high-level inference class that supports text, audio, vision, and multimodal tasks. It handles preprocessing the input and returns the appropriate output.
Instantiate a pipeline and specify the model to use for text generation. The model is downloaded and cached so you can easily reuse it. Finally, pass some text to prompt the model.
```py
from transformers import pipeline
pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")
pipeline("the secret to baking a really good cake is ")
[{'generated_text': 'the secret to baking a really good cake is 1) to use the right ingredients and 2) to follow the recipe exactly. the recipe for the cake is as follows: 1 cup of sugar, 1 cup of flour, 1 cup of milk, 1 cup of butter, 1 cup of eggs, 1 cup of chocolate chips. if you want to make 2 cakes, how much sugar do you need? To make 2 cakes, you will need 2 cups of sugar.'}]
```
To chat with a model, the usage pattern is the same. The only difference is that you need to construct a chat history (the input to `Pipeline`) between you and the system.
> [!TIP]
> You can also chat with a model directly from the command line.
> ```shell
> transformers chat Qwen/Qwen2.5-0.5B-Instruct
> ```
```py
import torch
from transformers import pipeline
chat = [
{"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
{"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]
pipeline = pipeline(task="text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16, device_map="auto")
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```
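To keep the conversation going, append your next turn to the returned history and call the pipeline again; a minimal sketch continuing the example above:

```py
chat = response[0]["generated_text"]
chat.append({"role": "user", "content": "Which of those would a tourist actually enjoy?"})
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```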
Expand the examples below to see how `Pipeline` works for different modalities and tasks.
<details>
<summary>Automatic speech recognition</summary>
```py
from transformers import pipeline
pipeline = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3")
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
</details>
<details>
<summary>Image classification</summary>
<h3 align="center">
<a><img src="https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"></a>
</h3>
```py
from transformers import pipeline
pipeline = pipeline(task="image-classification", model="facebook/dinov2-small-imagenet1k-1-layer")
pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'label': 'macaw', 'score': 0.997848391532898},
{'label': 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',
'score': 0.0016551691805943847},
{'label': 'lorikeet', 'score': 0.00018523589824326336},
{'label': 'African grey, African gray, Psittacus erithacus',
'score': 7.85409429227002e-05},
{'label': 'quail', 'score': 5.502637941390276e-05}]
```
</details>
<details>
<summary>Visual question answering</summary>
<h3 align="center">
<a><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg"></a>
</h3>
```py
from transformers import pipeline
pipeline = pipeline(task="visual-question-answering", model="Salesforce/blip-vqa-base")
pipeline(
image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg",
question="What is in the image?",
)
[{'answer': 'statue of liberty'}]
```
</details>
## Why should I use Transformers?
1. Easy-to-use state-of-the-art models:
- High performance on natural language understanding & generation, computer vision, audio, video, and multimodal tasks.
- Low barrier to entry for researchers, engineers, and developers.
- Few user-facing abstractions with just three classes to learn.
- A unified API for using all our pretrained models.
1. Lower compute costs, smaller carbon footprint:
- Share trained models instead of training from scratch.
- Reduce compute time and production costs.
- Dozens of model architectures with 1M+ pretrained checkpoints across all modalities.
1. Choose the right framework for every part of a model's lifetime:
- Train state-of-the-art models in 3 lines of code.
- Move a single model between PyTorch/JAX/TF2.0 frameworks at will.
- Pick the right framework for training, evaluation, and production.
1. Easily customize a model or an example to your needs:
- We provide examples for each architecture to reproduce the results published by its original authors.
- Model internals are exposed as consistently as possible.
- Model files can be used independently of the library for quick experiments (see the sketch below).
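As a small illustration of that last point, a sketch that bypasses `Pipeline` and drives a model class directly (the checkpoint name is just an example):

```py
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("Transformers is great!", return_tensors="pt")
outputs = model(**inputs)
print(model.config.id2label[outputs.logits.argmax(dim=-1).item()])
```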
<a target="_blank" href="https://huggingface.co/enterprise">
<img alt="Hugging Face Enterprise Hub" src="https://github.com/user-attachments/assets/247fb16d-d251-4583-96c4-d3d76dda4925">
</a><br>
## Why shouldn't I use Transformers?
- This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
- The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like [Accelerate](https://huggingface.co/docs/accelerate).
- The [example scripts](https://github.com/huggingface/transformers/tree/main/examples) are only *examples*. They may not work out-of-the-box on your specific use case, and you'll need to adapt the code for it to work.
## 100 projects using Transformers
Transformers is more than a toolkit to use pretrained models; it's a community of projects built around it and the
Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone
else to build their dream projects.
To celebrate Transformers reaching 100,000 stars, we wanted to put the spotlight on the
community with the [awesome-transformers](./awesome-transformers.md) page, which lists 100
incredible projects built with Transformers.
If you own or use a project that you believe should be part of the list, please open a PR to add it!
## Example models
You can test most of our models directly on their [Hub model pages](https://huggingface.co/models).
Expand each modality below to see a few example models for various use cases.
<details>
<summary>Audio</summary>
- Audio classification with [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo)
- Automatic speech recognition with [Moonshine](https://huggingface.co/UsefulSensors/moonshine)
- Keyword spotting with [Wav2Vec2](https://huggingface.co/superb/wav2vec2-base-superb-ks)
- Speech to speech generation with [Moshi](https://huggingface.co/kyutai/moshiko-pytorch-bf16)
- Text to audio with [MusicGen](https://huggingface.co/facebook/musicgen-large)
- Text to speech with [Bark](https://huggingface.co/suno/bark)
</details>
<details>
<summary>Computer vision</summary>
- Automatic mask generation with [SAM](https://huggingface.co/facebook/sam-vit-base)
- Depth estimation with [DepthPro](https://huggingface.co/apple/DepthPro-hf)
- Image classification with [DINO v2](https://huggingface.co/facebook/dinov2-base)
- Keypoint detection with [SuperGlue](https://huggingface.co/magic-leap-community/superglue_outdoor)
- Keypoint matching with [SuperGlue](https://huggingface.co/magic-leap-community/superglue)
- Object detection with [RT-DETRv2](https://huggingface.co/PekingU/rtdetr_v2_r50vd)
- Pose Estimation with [VitPose](https://huggingface.co/usyd-community/vitpose-base-simple)
- Universal segmentation with [OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_swin_large)
- Video classification with [VideoMAE](https://huggingface.co/MCG-NJU/videomae-large)
</details>
<details>
<summary>Multimodal</summary>
- Audio or text to text with [Qwen2-Audio](https://huggingface.co/Qwen/Qwen2-Audio-7B)
- Document question answering with [LayoutLMv3](https://huggingface.co/microsoft/layoutlmv3-base)
- Image or text to text with [Qwen-VL](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct)
- Image captioning with [BLIP-2](https://huggingface.co/Salesforce/blip2-opt-2.7b)
- OCR-based document understanding with [GOT-OCR2](https://huggingface.co/stepfun-ai/GOT-OCR-2.0-hf)
- Table question answering with [TAPAS](https://huggingface.co/google/tapas-base)
- Unified multimodal understanding and generation with [Emu3](https://huggingface.co/BAAI/Emu3-Gen)
- Vision to text with [Llava-OneVision](https://huggingface.co/llava-hf/llava-onevision-qwen2-0.5b-ov-hf)
- Visual question answering with [Llava](https://huggingface.co/llava-hf/llava-1.5-7b-hf)
- Visual referring expression segmentation with [Kosmos-2](https://huggingface.co/microsoft/kosmos-2-patch14-224)
</details>
<details>
<summary>NLP</summary>
- Masked word completion with [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base)
- Named entity recognition with [Gemma](https://huggingface.co/google/gemma-2-2b)
- Question answering with [Mixtral](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)
- Summarization with [BART](https://huggingface.co/facebook/bart-large-cnn)
- Translation with [T5](https://huggingface.co/google-t5/t5-base)
- Text generation with [Llama](https://huggingface.co/meta-llama/Llama-3.2-1B)
- Text classification with [Qwen](https://huggingface.co/Qwen/Qwen2.5-0.5B)
</details>
## Citation
We now have a [paper](https://www.aclweb.org/anthology/2020.emnlp-demos.6/) you can cite for the 🤗 Transformers library:
```bibtex
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
```
|
https://github.com/huggingface/transformers
|
conftest.py
|
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# tests directory-specific settings - this file is run automatically
# by pytest before any tests are run
import doctest
import sys
import warnings
from os.path import abspath, dirname, join
import _pytest
import pytest
from transformers.testing_utils import HfDoctestModule, HfDocTestParser
NOT_DEVICE_TESTS = {
"test_tokenization",
"test_processor",
"test_processing",
"test_beam_constraints",
"test_configuration_utils",
"test_data_collator",
"test_trainer_callback",
"test_trainer_utils",
"test_feature_extraction",
"test_image_processing",
"test_image_processor",
"test_image_transforms",
"test_optimization",
"test_retrieval",
"test_config",
"test_from_pretrained_no_checkpoint",
"test_keep_in_fp32_modules",
"test_gradient_checkpointing_backward_compatibility",
"test_gradient_checkpointing_enable_disable",
"test_torch_save_load",
"test_initialization",
"test_forward_signature",
"test_model_get_set_embeddings",
"test_model_main_input_name",
"test_correct_missing_keys",
"test_tie_model_weights",
"test_can_use_safetensors",
"test_load_save_without_tied_weights",
"test_tied_weights_keys",
"test_model_weights_reload_no_missing_tied_weights",
"test_mismatched_shapes_have_properly_initialized_weights",
"test_matched_shapes_have_loaded_weights_when_some_mismatched_shapes_exist",
"test_model_is_small",
"test_tf_from_pt_safetensors",
"test_flax_from_pt_safetensors",
"ModelTest::test_pipeline_", # None of the pipeline tests from PipelineTesterMixin (of which XxxModelTest inherits from) are running on device
"ModelTester::test_pipeline_",
"/repo_utils/",
"/utils/",
}
# allow having multiple repository checkouts and not needing to remember to rerun
# `pip install -e '.[dev]'` when switching between checkouts and running tests.
git_repo_path = abspath(join(dirname(__file__), "src"))
sys.path.insert(1, git_repo_path)
# silence FutureWarning warnings in tests since often we can't act on them until
# they become normal warnings - i.e. the tests still need to test the current functionality
warnings.simplefilter(action="ignore", category=FutureWarning)
def pytest_configure(config):
config.addinivalue_line("markers", "is_pipeline_test: mark test to run only when pipelines are tested")
config.addinivalue_line("markers", "is_staging_test: mark test to run only in the staging environment")
config.addinivalue_line("markers", "accelerate_tests: mark test that require accelerate")
config.addinivalue_line("markers", "not_device_test: mark the tests always running on cpu")
def pytest_collection_modifyitems(items):
for item in items:
if any(test_name in item.nodeid for test_name in NOT_DEVICE_TESTS):
item.add_marker(pytest.mark.not_device_test)
def pytest_addoption(parser):
from transformers.testing_utils import pytest_addoption_shared
pytest_addoption_shared(parser)
def pytest_terminal_summary(terminalreporter):
from transformers.testing_utils import pytest_terminal_summary_main
make_reports = terminalreporter.config.getoption("--make-reports")
if make_reports:
pytest_terminal_summary_main(terminalreporter, id=make_reports)
def pytest_sessionfinish(session, exitstatus):
# If no tests are collected, pytest exits with code 5, which makes the CI fail.
if exitstatus == 5:
session.exitstatus = 0
# Doctest custom flag to ignore output.
IGNORE_RESULT = doctest.register_optionflag("IGNORE_RESULT")
OutputChecker = doctest.OutputChecker
class CustomOutputChecker(OutputChecker):
def check_output(self, want, got, optionflags):
if IGNORE_RESULT & optionflags:
return True
return OutputChecker.check_output(self, want, got, optionflags)
doctest.OutputChecker = CustomOutputChecker
_pytest.doctest.DoctestModule = HfDoctestModule
doctest.DocTestParser = HfDocTestParser
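# Usage sketch for the custom flag above (hypothetical docstring): marking an
# example with `# doctest: +IGNORE_RESULT` makes CustomOutputChecker accept any
# output for that line:
#
#     >>> import random
#     >>> random.random()  # doctest: +IGNORE_RESULT
#     0.5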
|
https://github.com/huggingface/transformers
|
examples/3D_parallel.py
|
# Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""":
This script is used to test training a model using Tensor Parallelism and Data Parallelism.
Usage:
export CUDA_VISIBLE_DEVICES=0,1,2,3
export CUDA_VISIBLE_DEVICES=4,5,6,7
export CUDA_VISIBLE_DEVICES=5,6,7
TP_SIZE=2 DP_SIZE=2 torchrun --nproc_per_node=4 --rdzv_endpoint=localhost:29503 examples/3D_parallel.py
CP_SIZE=2 DP_SIZE=2 torchrun --nproc_per_node=4 examples/3D_parallel.py
CP_SIZE=2 TP_SIZE=2 torchrun --nproc_per_node=4 examples/3D_parallel.py
DP_SIZE=2 CP_SIZE=2 TP_SIZE=2 torchrun --nproc_per_node=8 examples/3D_parallel.py
TP_SIZE=1 CP_SIZE=4 torchrun --nproc_per_node=4 examples/3D_parallel.py
TP_SIZE=1 DP_SIZE=4 torchrun --nproc_per_node=4 examples/3D_parallel.py
TP_SIZE=4 DP_SIZE=1 torchrun --nproc_per_node=4 --rdzv_endpoint=localhost:29503 examples/3D_parallel.py
IGNORE_SANITY=1 CP_SIZE=1 TP_SIZE=1 DP_SIZE=1 torchrun --nproc_per_node=1 --rdzv_endpoint=localhost:29504 examples/3D_parallel.py
ocalhost:29504 test_train.py
"""
import logging
import os
from collections.abc import Iterable
from contextlib import nullcontext
import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
import torch.optim as optim
import wandb
from datasets import load_dataset
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict
from torch.distributed.checkpoint.stateful import Stateful
from torch.distributed.device_mesh import DeviceMesh
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import ShardingStrategy
from torch.distributed.tensor import DTensor
from torch.distributed.tensor.experimental import context_parallel
from torch.nn.attention import SDPBackend, sdpa_kernel
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from transformers import AutoModelForCausalLM, AutoTokenizer
# torch.use_deterministic_algorithms(True)
torch.backends.cudnn.deterministic = True
# Set up logging
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
datefmt="%m/%d/%Y %H:%M:%S",
level=logging.INFO,
)
logger = logging.getLogger(__name__)
# from torch.distributed.tensor.experimental._attention import set_rotate_method
# set_rotate_method("alltoall") # CP rotate shards using all-to-all
def main():
tp_size = int(os.environ.get("TP_SIZE", 1))
dp_size = int(os.environ.get("DP_SIZE", 1))
cp_size = int(os.environ.get("CP_SIZE", 1)) # Add CP size configuration
sdpa_backend = SDPBackend.FLASH_ATTENTION # For CP
# sdpa_backend = SDPBackend.MATH # For CP
global_batch_size = 8 # Desired global batch size
seq_len = 1024 # Sequence length
num_train_steps = 10000 # Number of training steps
LR = 1e-5
model_name = "HuggingFaceTB/SmolLM2-1.7B"
# model_name = "unsloth/Llama-3.2-1B"
CHECKPOINT_DIR = f"checkpoint_tp{tp_size}_dp{dp_size}_cp{cp_size}"
# Initialize distributed environment
if "RANK" in os.environ and "WORLD_SIZE" in os.environ:
dist.init_process_group("nccl")
rank = dist.get_rank()
world_size = dist.get_world_size()
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
assert world_size == tp_size * dp_size * cp_size, (
f"World size ({world_size}) must equal TP size ({tp_size}) * DP size ({dp_size}) * CP size ({cp_size})"
)
mesh = torch.arange(world_size).reshape(dp_size, tp_size, cp_size)
world_mesh = DeviceMesh(device_type="cuda", mesh=mesh, mesh_dim_names=("dp", "tp", "cp"))
tp_mesh = world_mesh["tp"]
dp_mesh = world_mesh["dp"]
cp_mesh = world_mesh["cp"]
world_mesh["dp", "cp"]._flatten(mesh_dim_name="dp_cp")
logger.info(f"Created DeviceMesh: {world_mesh}")
logger.info(
f"Distributed setup - Rank: {rank}, World size: {world_size}, Local rank: {local_rank}, DP: {dp_mesh.get_local_rank()}, TP: {tp_mesh.get_local_rank()}, CP: {cp_mesh.get_local_rank()}"
)
if dist.get_rank() == 0:
wandb.init(
project="tp_dp_test",
config={
"tp_size": tp_size,
"dp_size": dp_size,
"cp_size": cp_size,
"global_batch_size": global_batch_size,
"model_name": model_name,
"dataset": "roneneldan/TinyStories-1M",
"seq_len": seq_len,
"lr": LR,
"weight_decay": 0.1,
},
name=f"llama_tp{tp_size}_dp{dp_size}_cp{cp_size}"
if model_name == "unsloth/Llama-3.2-1B"
else f"tp{tp_size}_dp{dp_size}_cp{cp_size}",
)
logger.info("Wandb initialized.")
# Log the current training script to wandb
wandb.save(__file__)
# Load model and tokenizer
logger.info(f"Loading model and tokenizer from {model_name}")
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
logger.info(f"Set pad_token to eos_token: {tokenizer.pad_token}")
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_mesh=tp_mesh if dist.is_initialized() else None,
tp_plan="auto",
torch_dtype=torch.bfloat16,
)
logger.info(f"Model loaded onto device mesh: {tp_mesh}")
device = torch.device(f"cuda:{local_rank}")
logger.info(f"Using device: {device} for non-model tensors")
use_ddp = False
if dist.is_initialized() and dp_mesh.size() > 1:
model = FSDP(model, device_mesh=dp_mesh, sharding_strategy=ShardingStrategy.NO_SHARD)
use_ddp = True
model.train()
logger.info("Loading TinyStories dataset...")
raw_dataset = load_dataset("roneneldan/TinyStories", split="train[:1%]") # Use 1% for faster testing
def tokenize_function(examples):
# Tokenize the text without padding
tokenized_batch = tokenizer(
examples["text"], padding=False, truncation=True, max_length=seq_len, return_tensors=None
)
# Set labels to be the same as input_ids for Causal LM
tokenized_batch["labels"] = tokenized_batch["input_ids"].copy()
return tokenized_batch
tokenized_dataset = raw_dataset.map(tokenize_function, batched=True, remove_columns=["text"])
logger.info(f"Dataset loaded and tokenized. Size: {len(tokenized_dataset)}")
# Create packed sequences
def create_packed_sequences(examples):
# Flatten all sequences
all_tokens = []
for input_ids in examples["input_ids"]:
all_tokens.extend(input_ids)
# Split into sequences of seq_len + 1 (for input + label)
num_sequences = len(all_tokens) // (seq_len + 1)
packed_input_ids = []
packed_labels = []
for i in range(num_sequences):
start_idx = i * (seq_len + 1)
end_idx = start_idx + (seq_len + 1)
# Get the full sequence
full_sequence = all_tokens[start_idx:end_idx]
# For input_ids, remove the last token
packed_input_ids.append(full_sequence[:-1])
# For labels, remove the first token
packed_labels.append(full_sequence[1:])
return {"input_ids": packed_input_ids, "labels": packed_labels}
# Apply packing to the dataset
packed_dataset = tokenized_dataset.map(
create_packed_sequences,
batched=True,
remove_columns=tokenized_dataset.column_names,
batch_size=1000, # Process in batches for efficiency
num_proc=60,
)
logger.info(f"Dataset packed. New size: {len(packed_dataset)}")
# Shuffle the packed dataset
packed_dataset = packed_dataset.shuffle(seed=42)
logger.info("Packed dataset shuffled")
# Calculate local batch size
if dist.is_initialized():
assert global_batch_size % dp_mesh.size() == 0, (
f"Global batch size ({global_batch_size}) must be divisible by DP size ({dp_mesh.size()})"
)
local_batch_size = global_batch_size // dp_mesh.size()
else:
local_batch_size = global_batch_size
logger.info(
f"Global batch size: {global_batch_size}, DP size: {dp_size if dist.is_initialized() else 1}, Local batch size: {local_batch_size}"
)
# Simple collate function since sequences are already packed
def collate_fn(batch):
input_ids = torch.tensor([item["input_ids"] for item in batch], dtype=torch.long)
labels = torch.tensor([item["labels"] for item in batch], dtype=torch.long)
return {"input_ids": input_ids, "labels": labels}
if dist.is_initialized():
sampler = DistributedSampler(
packed_dataset, num_replicas=dp_mesh.size(), rank=dp_mesh.get_local_rank(), shuffle=False
)
else:
sampler = None
dataloader = DataLoader(
packed_dataset,
batch_size=local_batch_size,
sampler=sampler,
shuffle=False,
collate_fn=collate_fn,
pin_memory=True,
)
logger.info(f"DataLoader created. Distributed: {dist.is_initialized()}")
optimizer = optim.AdamW(model.parameters(), lr=LR, weight_decay=0.1)
# Training loop
logger.info(f"Starting training for {num_train_steps} steps...")
model.train()
step = 0
while step < num_train_steps:
for batch in dataloader:
if step >= num_train_steps:
break # Exit loop if max steps reached
# Move batch to appropriate device
batch = {k: v.to(device) for k, v in batch.items()}
optimizer.zero_grad()
# Add position_ids to batch before CP sharding
batch_size = batch["input_ids"].shape[0]
position_ids = torch.arange(0, seq_len, dtype=torch.long, device=device)
position_ids = position_ids.unsqueeze(0).expand(batch_size, -1)
batch["position_ids"] = position_ids
from torch.distributed.tensor.experimental._attention import _cp_options
_cp_options.enable_load_balance = False
with sdpa_kernel(sdpa_backend): # TODO: ideally move this to attention implementation
cp_context = (
nullcontext()
if cp_mesh.size() == 1
else context_parallel(
cp_mesh,
buffers=[
batch["input_ids"],
batch["labels"],
batch["position_ids"],
],
buffer_seq_dims=[1, 1, 1],
)
)
with cp_context:
# Pop labels from batch before model forward pass
labels = batch.pop("labels")
outputs = model(**batch) # [mbs, seq_len/cp]
loss = outputs.loss
logits = outputs.logits
# Compute loss with shifted labels
loss = model.loss_function(
logits=logits, labels=None, shift_labels=labels, vocab_size=model.config.vocab_size
)
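# The labels produced by create_packed_sequences are already shifted by one,
# so they are passed via `shift_labels` (with labels=None) to skip the loss
# function's internal shifting.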
loss.backward()
# all reduce grads across dp_cp if applicable
all_reduce_grads(model, world_mesh, use_ddp=use_ddp)
if hasattr(model, "clip_grad_norm_"):
gradnorm = model.clip_grad_norm_(max_norm=1.0, norm_type=2.0) # TODO: fix reported gradnorm
else:
# only works with FSDP's NO_SHARD otherwise we should use FSDP's clip_grad_norm_
assert len(list(model.parameters())) > 5, "No parameters found in model. Probably a DDP bug."
gradnorm = clip_grad_norm_(model.parameters(), max_norm=1.0, norm_type=2.0, foreach=True)
optimizer.step()
# allreduce loss across cp_dp before logging
if dist.is_initialized() and (cp_mesh.size() > 1 or dp_mesh.size() > 1):
dist.all_reduce(loss, group=world_mesh["dp_cp"].get_group(), op=dist.ReduceOp.AVG)
current_loss = loss.item()
# Log loss and gradnorm to wandb (only on rank 0 of dp group)
if not dist.is_initialized() or dist.get_rank() == 0:
logger.info(
f"Step: {step} | GBS: {global_batch_size} | DP: {dp_mesh.size()} | TP: {tp_mesh.size()} | CP: {cp_mesh.size()} | Loss: {current_loss} | Gradnorm: {gradnorm} | lr: {LR}"
)
wandb.log(
{
"train/loss": current_loss,
"train/gradnorm": gradnorm,
"step": step,
"lr": LR,
"GBS": global_batch_size,
}
)
step += 1 # Increment step count
logger.info("Training loop finished.")
# Save model using DCP (only if distributed)
if dist.is_initialized():
state_dict = {"app": AppState(model, optimizer)}
dcp.save(
state_dict=state_dict,
checkpoint_id=CHECKPOINT_DIR,
)
logger.info(f"Saved checkpoint to {CHECKPOINT_DIR}")
else:
# Fallback to regular save for non-distributed case
save_dir = "test_model_nondist"
model.save_pretrained(save_dir, safe_serialization=False)
tokenizer.save_pretrained(save_dir) # Save tokenizer too
logger.info(f"Saved model to {save_dir}")
dist.destroy_process_group()
logger.info("Cleaned up distributed process group")
# Finish wandb run on rank 0
if dist.get_rank() == 0:
wandb.finish()
logger.info("Wandb run finished.")
def all_reduce_grads(model, world_mesh, use_ddp):
"""All reduce gradients across dp_cp if applicable."""
cp_mesh = world_mesh["cp"]
if use_ddp:
# DDP/FSDP takes care of syncing grads
mesh = cp_mesh
else:
mesh = world_mesh["dp", "cp"]._flatten(mesh_dim_name="dp_cp")
if dist.is_initialized() and mesh.size() > 1:
for name, param in model.named_parameters():
if param.grad is not None:
# Workaround for cross-mesh communication limitation with DTensor gradients
if isinstance(param.grad, DTensor):
local_grad = param.grad.to_local()
# Ensure grad requires grad for inplace modification checks (might not be needed)
# local_grad = local_grad.detach().requires_grad_(True)
torch.distributed.all_reduce(local_grad, op=torch.distributed.ReduceOp.SUM, group=mesh.get_group())
local_grad = local_grad / mesh.size()
# Assign averaged grad back - need careful handling if DTensor structure is complex
# This simple assignment might work if the grad structure matches param structure
param.grad = DTensor.from_local(
local_grad, device_mesh=param.grad.device_mesh, placements=param.grad.placements
)
else:
# Handle regular tensors if any exist (e.g. buffers not converted to DTensor)
torch.distributed.all_reduce(param.grad, op=torch.distributed.ReduceOp.AVG, group=mesh.get_group())
class AppState(Stateful):
"""Wrapper for checkpointing the Application State including model and optimizer."""
def __init__(self, model, optimizer=None):
self.model = model
self.optimizer = optimizer
def state_dict(self):
model_state_dict, optimizer_state_dict = get_state_dict(self.model, self.optimizer)
return {"model": model_state_dict, "optim": optimizer_state_dict}
def load_state_dict(self, state_dict):
set_state_dict(
self.model, self.optimizer, model_state_dict=state_dict["model"], optim_state_dict=state_dict["optim"]
)
def clip_grad_norm_(
parameters: Iterable[torch.Tensor],
max_norm: float,
norm_type: float = 2.0,
error_if_nonfinite: bool = False,
foreach: bool | None = None,
) -> torch.Tensor:
"""
Clip the gradient norm of an iterable of parameters.
"""
# Filter out parameters with no gradients
parameters = [p for p in parameters if p.grad is not None]
assert len(parameters) > 0, "No parameters with gradients found"
# Calculate total norm
if norm_type == float("inf"):
total_norm = max(p.grad.detach().abs().max() for p in parameters)
else:
total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type) for p in parameters]), norm_type)
# Convert DTensor to local tensor if needed
if isinstance(total_norm, DTensor):
total_norm = total_norm.full_tensor()
# Clip gradients
clip_coef = max_norm / (total_norm + 1e-6)
if clip_coef < 1:
for p in parameters:
p.grad.detach().mul_(clip_coef)
return total_norm
if __name__ == "__main__":
main()
|
https://github.com/huggingface/transformers
|
examples/run_on_remote.py
|
#!/usr/bin/env python
# Copyright 2021 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import shlex
import runhouse as rh
if __name__ == "__main__":
# Refer to https://runhouse-docs.readthedocs-hosted.com/en/latest/api/python/cluster.html#hardware-setup for cloud access
# setup instructions, if using on-demand hardware
# If user passes --user <user> --host <host> --key_path <key_path> <example> <args>, fill them in as BYO cluster
# If user passes --instance <instance> --provider <provider> <example> <args>, fill them in as on-demand cluster
# Throw an error if user passes both BYO and on-demand cluster args
# Otherwise, use default values
parser = argparse.ArgumentParser()
parser.add_argument("--user", type=str, default="ubuntu")
parser.add_argument("--host", type=str, default="localhost")
parser.add_argument("--key_path", type=str, default=None)
parser.add_argument("--instance", type=str, default="V100:1")
parser.add_argument("--provider", type=str, default="cheapest")
parser.add_argument("--use_spot", type=bool, default=False)
parser.add_argument("--example", type=str, default="pytorch/text-generation/run_generation.py")
args, unknown = parser.parse_known_args()
if args.host != "localhost":
if args.instance != "V100:1" or args.provider != "cheapest":
raise ValueError("Cannot specify both BYO and on-demand cluster args")
cluster = rh.cluster(
name="rh-cluster", ips=[args.host], ssh_creds={"ssh_user": args.user, "ssh_private_key": args.key_path}
)
else:
cluster = rh.cluster(
name="rh-cluster", instance_type=args.instance, provider=args.provider, use_spot=args.use_spot
)
example_dir = args.example.rsplit("/", 1)[0]
# Set up remote environment
cluster.install_packages(["pip:./"]) # Installs transformers from local source
# Note transformers is copied into the home directory on the remote machine, so we can install from there
cluster.run([f"pip install -r transformers/examples/{example_dir}/requirements.txt"])
cluster.run(["pip install torch --upgrade --extra-index-url https://download.pytorch.org/whl/cu117"])
# Run example. You can bypass the CLI wrapper and paste your own code here.
cluster.run([f"python transformers/examples/{args.example} {shlex.join(unknown)}"])
# Alternatively, we can just import and run a training function (especially if there's no wrapper CLI):
# from my_script... import train
# reqs = ['pip:./', 'torch', 'datasets', 'accelerate', 'evaluate', 'tqdm', 'scipy', 'scikit-learn', 'tensorboard']
# launch_train_gpu = rh.function(fn=train,
# system=gpu,
# reqs=reqs,
# name='train_bert_glue')
#
# We can pass in arguments just like we would to a function:
# launch_train_gpu(num_epochs = 3, lr = 2e-5, seed = 42, batch_size = 16
# stream_logs=True)
|
https://github.com/huggingface/transformers
|
setup.py
|
# Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Simple check list from AllenNLP repo: https://github.com/allenai/allennlp/blob/main/setup.py
To create the package for pypi.
1. Create the release branch named: v<RELEASE>-release, for example v4.19-release. For a patch release checkout the
current release branch.
If releasing on a special branch, copy the updated README.md on the main branch for the commit you will make
for the post-release and run `make fix-copies` on the main branch as well.
2. Run `make pre-release` (or `make pre-patch` for a patch release) and commit these changes with the message:
"Release: <VERSION>" and push.
3. Go back to the main branch and run `make post-release` then `make fix-copies`. Commit these changes with the
message "v<NEXT_VERSION>.dev.0" and push to main.
# If you were just cutting the branch in preparation for a release, you can stop here for now.
4. Wait for the tests on the release branch to be completed and be green (otherwise revert and fix bugs)
5. On the release branch, add a tag in git to mark the release: "git tag v<VERSION> -m 'Adds tag v<VERSION> for pypi' "
Push the tag to git: git push --tags origin v<RELEASE>-release
6. Build both the sources and the wheel. Do not change anything in setup.py between
creating the wheel and the source distribution (obviously).
Run `make build-release`. This will build the release and do some sanity checks for you. If this ends with an error
message, you need to fix things before going further.
You should now have a /dist directory with both .whl and .tar.gz source versions.
7. Check that everything looks correct by uploading the package to the pypi test server:
twine upload dist/* -r testpypi
(PyPI suggests using twine, as other methods upload files via plaintext.)
If you need to specify the repository URL, use the following command instead:
twine upload dist/* -r testpypi --repository-url=https://test.pypi.org/legacy/
Check that you can install it in a virtualenv by running:
pip install -i https://testpypi.python.org/pypi transformers
Check you can run the following commands:
python -c "from transformers import pipeline; classifier = pipeline('text-classification'); print(classifier('What a nice release'))"
python -c "from transformers import *"
python utils/check_build.py --check_lib
If making a patch release, double check the bug you are patching is indeed resolved.
8. Upload the final version to actual pypi:
twine upload dist/* -r pypi
9. Copy the release notes from RELEASE.md to the tag in github once everything is looking hunky-dory.
"""
import os
import re
import shutil
from pathlib import Path
from setuptools import Command, find_packages, setup
# Remove stale transformers.egg-info directory to avoid https://github.com/pypa/pip/issues/5466
stale_egg_info = Path(__file__).parent / "transformers.egg-info"
if stale_egg_info.exists():
print(
(
"Warning: {} exists.\n\n"
"If you recently updated transformers to 3.0 or later, this is expected,\n"
"but it may prevent transformers from installing in editable mode.\n\n"
"This directory is automatically generated by Python's packaging tools.\n"
"I will remove it now.\n\n"
"See https://github.com/pypa/pip/issues/5466 for details.\n"
).format(stale_egg_info)
)
shutil.rmtree(stale_egg_info)
# IMPORTANT:
# 1. all dependencies should be listed here with their version requirements if any
# 2. once modified, run: `make deps_table_update` to update src/transformers/dependency_versions_table.py
_deps = [
"Pillow>=10.0.1,<=15.0",
"accelerate>=0.26.0",
"av",
"beautifulsoup4",
"blobfile",
"codecarbon>=2.8.1",
"cookiecutter==1.7.3",
"dataclasses",
"datasets!=2.5.0",
"deepspeed>=0.9.3",
"diffusers",
"dill<0.3.5",
"evaluate>=0.2.0",
"faiss-cpu",
"fastapi",
"filelock",
"flax>=0.4.1,<=0.7.0",
"ftfy",
"fugashi>=1.0",
"GitPython<3.1.19",
"hf-doc-builder>=0.3.0",
"hf_xet",
"huggingface-hub>=0.30.0,<1.0",
"importlib_metadata",
"ipadic>=1.0.0,<2.0",
"jax>=0.4.1,<=0.4.13",
"jaxlib>=0.4.1,<=0.4.13",
"jieba",
"jinja2>=3.1.0",
"kenlm",
# Keras pin - this is to make sure Keras 3 doesn't destroy us. Remove or change when we have proper support.
"keras>2.9,<2.16",
"keras-nlp>=0.3.1,<0.14.0", # keras-nlp 0.14 doesn't support keras 2, see pin on keras.
"kernels>=0.4.4,<0.5",
"librosa",
"natten>=0.14.6,<0.15.0",
"nltk<=3.8.1",
"num2words",
"numpy>=1.17",
"onnxconverter-common",
"onnxruntime-tools>=1.4.2",
"onnxruntime>=1.4.0",
"opencv-python",
"optimum-benchmark>=0.3.0",
"optuna",
"optax>=0.0.8,<=0.1.4",
"pandas<2.3.0", # `datasets` requires `pandas` while `pandas==2.3.0` has issues with CircleCI on 2025/06/05
"packaging>=20.0",
"parameterized",
"phonemizer",
"protobuf",
"psutil",
"pyyaml>=5.1",
"pydantic",
"pytest>=7.2.0",
"pytest-asyncio",
"pytest-rerunfailures",
"pytest-timeout",
"pytest-xdist",
"pytest-order",
"python>=3.9.0",
"ray[tune]>=2.7.0",
"regex!=2019.12.17",
"requests",
"rhoknp>=1.1.0,<1.3.1",
"rjieba",
"rouge-score!=0.0.7,!=0.0.8,!=0.1,!=0.1.1",
"ruff==0.11.2",
# `sacrebleu` not used in `transformers`. However, it is needed in several tests, when a test calls
# `evaluate.load("sacrebleu")`. This metric is used in the examples that we use to test the `Trainer` with, in the
# `Trainer` tests (see references to `run_translation.py`).
"sacrebleu>=1.4.12,<2.0.0",
"sacremoses",
"safetensors>=0.4.3",
"sagemaker>=2.31.0",
"schedulefree>=1.2.6",
"scikit-learn",
"scipy<1.13.0", # SciPy >= 1.13.0 is not supported with the current jax pin (`jax>=0.4.1,<=0.4.13`)
"sentencepiece>=0.1.91,!=0.1.92",
"sigopt",
"starlette",
"sudachipy>=0.6.6",
"sudachidict_core>=20220729",
"tensorboard",
# TensorFlow pin. When changing this value, update examples/tensorflow/_tests_requirements.txt accordingly
"tensorflow-cpu>2.9,<2.16",
"tensorflow>2.9,<2.16",
"tensorflow-text<2.16",
"tensorflow-probability<0.24",
"tf2onnx",
"timeout-decorator",
"tiktoken",
"timm<=1.0.11",
"tokenizers>=0.21,<0.22",
"torch>=2.1,<2.7", # Installing torch 2.7 results in slower compiled LLMs. Pinned while we investigate.
"torchaudio",
"torchvision",
"pyctcdecode>=0.4.0",
"tqdm>=4.27",
"unidic>=1.0.2",
"unidic_lite>=1.0.7",
"urllib3<2.0.0",
"uvicorn",
"pytest-rich",
"libcst",
"rich",
"opentelemetry-api",
"opentelemetry-exporter-otlp",
"opentelemetry-sdk",
]
# this is a lookup table with items like:
#
# tokenizers: "tokenizers==0.9.4"
# packaging: "packaging"
#
# some of the values are versioned whereas others aren't.
deps = {b: a for a, b in (re.findall(r"^(([^!=<>~ ]+)(?:[!=<>~ ].*)?$)", x)[0] for x in _deps)}
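# e.g. the entry "tokenizers>=0.21,<0.22" produces the pair
# ("tokenizers>=0.21,<0.22", "tokenizers"), so deps["tokenizers"] returns the
# full version-pinned spec.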
# since we save this data in src/transformers/dependency_versions_table.py it can be easily accessed from
# anywhere. If you need to quickly access the data from this table in a shell, you can do so easily with:
#
# python -c 'import sys; from transformers.dependency_versions_table import deps; \
# print(" ".join([ deps[x] for x in sys.argv[1:]]))' tokenizers datasets
#
# Just pass the desired package names to that script as it's shown with 2 packages above.
#
# If transformers is not yet installed and the work is done from the cloned repo remember to add `PYTHONPATH=src` to the script above
#
# You can then feed this for example to `pip`:
#
# pip install -U $(python -c 'import sys; from transformers.dependency_versions_table import deps; \
# print(" ".join([deps[x] for x in sys.argv[1:]]))' tokenizers datasets)
#
def deps_list(*pkgs):
return [deps[pkg] for pkg in pkgs]
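# e.g., with the pins above, deps_list("torch", "accelerate") returns
# ["torch>=2.1,<2.7", "accelerate>=0.26.0"].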
class DepsTableUpdateCommand(Command):
"""
A custom distutils command that updates the dependency table.
usage: python setup.py deps_table_update
"""
description = "build runtime dependency table"
user_options = [
# format: (long option, short option, description).
("dep-table-update", None, "updates src/transformers/dependency_versions_table.py"),
]
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()])
content = [
"# THIS FILE HAS BEEN AUTOGENERATED. To update:",
"# 1. modify the `_deps` dict in setup.py",
"# 2. run `make deps_table_update``",
"deps = {",
entries,
"}",
"",
]
target = "src/transformers/dependency_versions_table.py"
print(f"updating {target}")
with open(target, "w", encoding="utf-8", newline="\n") as f:
f.write("\n".join(content))
extras = {}
extras["ja"] = deps_list("fugashi", "ipadic", "unidic_lite", "unidic", "sudachipy", "sudachidict_core", "rhoknp")
extras["sklearn"] = deps_list("scikit-learn")
extras["tf"] = deps_list("tensorflow", "onnxconverter-common", "tf2onnx", "tensorflow-text", "keras-nlp")
extras["tf-cpu"] = deps_list(
"keras",
"tensorflow-cpu",
"onnxconverter-common",
"tf2onnx",
"tensorflow-text",
"keras-nlp",
"tensorflow-probability",
)
extras["torch"] = deps_list("torch", "accelerate")
extras["accelerate"] = deps_list("accelerate")
extras["hf_xet"] = deps_list("hf_xet")
if os.name == "nt": # windows
extras["retrieval"] = deps_list("datasets") # faiss is not supported on windows
extras["flax"] = [] # jax is not supported on windows
else:
extras["retrieval"] = deps_list("faiss-cpu", "datasets")
extras["flax"] = deps_list("jax", "jaxlib", "flax", "optax", "scipy")
extras["tokenizers"] = deps_list("tokenizers")
extras["ftfy"] = deps_list("ftfy")
extras["onnxruntime"] = deps_list("onnxruntime", "onnxruntime-tools")
extras["onnx"] = deps_list("onnxconverter-common", "tf2onnx") + extras["onnxruntime"]
extras["modelcreation"] = deps_list("cookiecutter")
extras["sagemaker"] = deps_list("sagemaker")
extras["deepspeed"] = deps_list("deepspeed") + extras["accelerate"]
extras["optuna"] = deps_list("optuna")
extras["ray"] = deps_list("ray[tune]")
extras["sigopt"] = deps_list("sigopt")
extras["hub-kernels"] = deps_list("kernels")
extras["integrations"] = extras["hub-kernels"] + extras["optuna"] + extras["ray"] + extras["sigopt"]
extras["serving"] = deps_list("pydantic", "uvicorn", "fastapi", "starlette")
extras["audio"] = deps_list(
"librosa",
"pyctcdecode",
"phonemizer",
"kenlm",
)
# `pip install ".[speech]"` is deprecated and `pip install ".[torch-speech]"` should be used instead
extras["speech"] = deps_list("torchaudio") + extras["audio"]
extras["torch-speech"] = deps_list("torchaudio") + extras["audio"]
extras["tf-speech"] = extras["audio"]
extras["flax-speech"] = extras["audio"]
extras["vision"] = deps_list("Pillow")
extras["timm"] = deps_list("timm")
extras["torch-vision"] = deps_list("torchvision") + extras["vision"]
extras["natten"] = deps_list("natten")
extras["codecarbon"] = deps_list("codecarbon")
extras["video"] = deps_list("av")
extras["num2words"] = deps_list("num2words")
extras["sentencepiece"] = deps_list("sentencepiece", "protobuf")
extras["tiktoken"] = deps_list("tiktoken", "blobfile")
extras["testing"] = (
deps_list(
"pytest",
"pytest-asyncio",
"pytest-rich",
"pytest-xdist",
"pytest-order",
"pytest-rerunfailures",
"timeout-decorator",
"parameterized",
"psutil",
"datasets",
"dill",
"evaluate",
"pytest-timeout",
"ruff",
"rouge-score",
"nltk",
"GitPython",
"sacremoses",
"rjieba",
"beautifulsoup4",
"tensorboard",
"pydantic",
"sentencepiece",
"sacrebleu", # needed in trainer tests, see references to `run_translation.py`
)
+ extras["retrieval"]
+ extras["modelcreation"]
)
extras["deepspeed-testing"] = extras["deepspeed"] + extras["testing"] + extras["optuna"] + extras["sentencepiece"]
extras["ruff"] = deps_list("ruff")
extras["quality"] = deps_list("datasets", "ruff", "GitPython", "urllib3", "libcst", "rich", "pandas")
extras["all"] = (
extras["tf"]
+ extras["torch"]
+ extras["flax"]
+ extras["sentencepiece"]
+ extras["tokenizers"]
+ extras["torch-speech"]
+ extras["vision"]
+ extras["integrations"]
+ extras["timm"]
+ extras["torch-vision"]
+ extras["codecarbon"]
+ extras["accelerate"]
+ extras["video"]
+ extras["num2words"]
)
extras["dev-torch"] = (
extras["testing"]
+ extras["torch"]
+ extras["sentencepiece"]
+ extras["tokenizers"]
+ extras["torch-speech"]
+ extras["vision"]
+ extras["integrations"]
+ extras["timm"]
+ extras["torch-vision"]
+ extras["codecarbon"]
+ extras["quality"]
+ extras["ja"]
+ extras["sklearn"]
+ extras["modelcreation"]
+ extras["onnxruntime"]
+ extras["num2words"]
)
extras["dev-tensorflow"] = (
extras["testing"]
+ extras["tf"]
+ extras["sentencepiece"]
+ extras["tokenizers"]
+ extras["vision"]
+ extras["quality"]
+ extras["sklearn"]
+ extras["modelcreation"]
+ extras["onnx"]
+ extras["tf-speech"]
)
extras["dev"] = (
extras["all"] + extras["testing"] + extras["quality"] + extras["ja"] + extras["sklearn"] + extras["modelcreation"]
)
extras["torchhub"] = deps_list(
"filelock",
"huggingface-hub",
"importlib_metadata",
"numpy",
"packaging",
"protobuf",
"regex",
"requests",
"sentencepiece",
"torch",
"tokenizers",
"tqdm",
)
extras["benchmark"] = deps_list("optimum-benchmark")
# OpenTelemetry dependencies for metrics collection in continuous batching
extras["open-telemetry"] = deps_list("opentelemetry-api", "opentelemetry-exporter-otlp", "opentelemetry-sdk")
# when modifying the following list, make sure to update src/transformers/dependency_versions_check.py
install_requires = [
deps["filelock"], # filesystem locks, e.g., to prevent parallel downloads
deps["huggingface-hub"],
deps["numpy"],
deps["packaging"], # utilities from PyPA to e.g., compare versions
deps["pyyaml"], # used for the model cards metadata
deps["regex"], # for OpenAI GPT
deps["requests"], # for downloading models over HTTPS
deps["tokenizers"],
deps["safetensors"],
deps["tqdm"], # progress bars in model download and training scripts
]
setup(
name="transformers",
version="4.53.0.dev0", # expected format is one of x.y.z.dev0, or x.y.z.rc1 or x.y.z (no to dashes, yes to dots)
author="The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)",
author_email="[email protected]",
description="State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow",
long_description=open("README.md", "r", encoding="utf-8").read(),
long_description_content_type="text/markdown",
keywords="NLP vision speech deep learning transformer pytorch tensorflow jax BERT GPT-2 Wav2Vec2 ViT",
license="Apache 2.0 License",
url="https://github.com/huggingface/transformers",
package_dir={"": "src"},
packages=find_packages("src"),
include_package_data=True,
package_data={"": ["**/*.cu", "**/*.cpp", "**/*.cuh", "**/*.h", "**/*.pyx", "py.typed"]},
zip_safe=False,
extras_require=extras,
entry_points={
"console_scripts": [
"transformers=transformers.commands.transformers_cli:main",
"transformers-cli=transformers.commands.transformers_cli:main_cli",
]
},
python_requires=">=3.9.0",
install_requires=list(install_requires),
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
],
cmdclass={"deps_table_update": DepsTableUpdateCommand},
)
extras["tests_torch"] = deps_list()
extras["tests_tf"] = deps_list()
extras["tests_flax"] = deps_list()
extras["tests_hub"] = deps_list()
extras["tests_pipelines_torch"] = deps_list()
extras["tests_pipelines_tf"] = deps_list()
extras["tests_onnx"] = deps_list()
extras["tests_examples_torch"] = deps_list()
extras["tests_examples_tf"] = deps_list()
extras["tests_custom_tokenizers"] = deps_list()
extras["tests_exotic_models"] = deps_list()
extras["consistency"] = deps_list()
https://github.com/pyg-team/pytorch_geometric
README.md
<p align="center">
<img height="150" src="https://raw.githubusercontent.com/pyg-team/pyg_sphinx_theme/master/pyg_sphinx_theme/static/img/pyg_logo_text.svg?sanitize=true" />
</p>
______________________________________________________________________
[![PyPI Version][pypi-image]][pypi-url]
[![Testing Status][testing-image]][testing-url]
[![Linting Status][linting-image]][linting-url]
[![Docs Status][docs-image]][docs-url]
[![Contributing][contributing-image]][contributing-url]
[![Slack][slack-image]][slack-url]
**[Documentation](https://pytorch-geometric.readthedocs.io)** | **[Paper](https://arxiv.org/abs/1903.02428)** | **[Colab Notebooks and Video Tutorials](https://pytorch-geometric.readthedocs.io/en/latest/get_started/colabs.html)** | **[External Resources](https://pytorch-geometric.readthedocs.io/en/latest/external/resources.html)** | **[OGB Examples](https://github.com/snap-stanford/ogb/tree/master/examples)**
**PyG** *(PyTorch Geometric)* is a library built upon [PyTorch](https://pytorch.org/) to easily write and train Graph Neural Networks (GNNs) for a wide range of applications related to structured data.
It consists of various methods for deep learning on graphs and other irregular structures, also known as *[geometric deep learning](http://geometricdeeplearning.com/)*, from a variety of published papers.
In addition, it provides easy-to-use mini-batch loaders for operating on many small graphs as well as on single giant graphs, [multi-GPU support](https://github.com/pyg-team/pytorch_geometric/tree/master/examples/multi_gpu), [`torch.compile`](https://pytorch-geometric.readthedocs.io/en/latest/advanced/compile.html) support, [`DataPipe`](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/datapipe.py) support, a large number of common benchmark datasets (with simple interfaces to create your own), and helpful transforms, both for learning on arbitrary graphs and on 3D meshes or point clouds.
**[Click here to join our Slack community!][slack-url]**
<p align="center">
<a href="https://medium.com/stanford-cs224w"><img style="max-width: 941px" src="https://data.pyg.org/img/cs224w_tutorials.png" /></a>
</p>
______________________________________________________________________
- [Library Highlights](#library-highlights)
- [Quick Tour for New Users](#quick-tour-for-new-users)
- [Architecture Overview](#architecture-overview)
- [Implemented GNN Models](#implemented-gnn-models)
- [Installation](#installation)
## Library Highlights
Whether you are a machine learning researcher or first-time user of machine learning toolkits, here are some reasons to try out PyG for machine learning on graph-structured data.
- **Easy-to-use and unified API**:
All it takes is 10-20 lines of code to get started with training a GNN model (see the next section for a [quick tour](#quick-tour-for-new-users)).
PyG is *PyTorch-on-the-rocks*: It utilizes a tensor-centric API and keeps design principles close to vanilla PyTorch.
If you are already familiar with PyTorch, utilizing PyG is straightforward.
- **Comprehensive and well-maintained GNN models**:
Most of the state-of-the-art Graph Neural Network architectures have been implemented by library developers or authors of research papers and are ready to be applied.
- **Great flexibility**:
Existing PyG models can easily be extended for conducting your own research with GNNs.
Making modifications to existing models or creating new architectures is simple, thanks to its easy-to-use message passing API, and a variety of operators and utility functions.
- **Large-scale real-world GNN models**:
We focus on the needs of GNN applications in challenging real-world scenarios and support learning on diverse types of graphs, including but not limited to: scalable GNNs for graphs with millions of nodes, dynamic GNNs for node predictions over time, and heterogeneous GNNs with multiple node and edge types.
## Quick Tour for New Users
In this quick tour, we highlight the ease of creating and training a GNN model with only a few lines of code.
### Train your own GNN model
In the first glimpse of PyG, we implement the training of a GNN for classifying papers in a citation graph.
For this, we load the [Cora](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.datasets.Planetoid.html) dataset, and create a simple 2-layer GCN model using the pre-defined [`GCNConv`](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GCNConv.html):
```python
import torch
from torch import Tensor
from torch_geometric.nn import GCNConv
from torch_geometric.datasets import Planetoid
dataset = Planetoid(root='.', name='Cora')
class GCN(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels):
super().__init__()
self.conv1 = GCNConv(in_channels, hidden_channels)
self.conv2 = GCNConv(hidden_channels, out_channels)
def forward(self, x: Tensor, edge_index: Tensor) -> Tensor:
# x: Node feature matrix of shape [num_nodes, in_channels]
# edge_index: Graph connectivity matrix of shape [2, num_edges]
x = self.conv1(x, edge_index).relu()
x = self.conv2(x, edge_index)
return x
model = GCN(dataset.num_features, 16, dataset.num_classes)
```
<details>
<summary>We can now optimize the model in a training loop, similar to the <a href="https://pytorch.org/tutorials/beginner/basics/optimization_tutorial.html#full-implementation">standard PyTorch training procedure</a>.</summary>
```python
import torch.nn.functional as F
data = dataset[0]
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(200):
pred = model(data.x, data.edge_index)
loss = F.cross_entropy(pred[data.train_mask], data.y[data.train_mask])
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
</details>
More information about evaluating final model performance can be found in the corresponding [example](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py).
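For a quick impression, here is a minimal evaluation sketch; it assumes the `model` and `data` objects from the snippets above (the linked example contains the complete logic):

```python
model.eval()
with torch.no_grad():
    pred = model(data.x, data.edge_index).argmax(dim=-1)

correct = int((pred[data.test_mask] == data.y[data.test_mask]).sum())
acc = correct / int(data.test_mask.sum())
print(f'Test accuracy: {acc:.4f}')  # a 2-layer GCN typically reaches ~0.8 on Cora
```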
### Create your own GNN layer
In addition to the easy application of existing GNNs, PyG makes it simple to implement custom Graph Neural Networks (see [here](https://pytorch-geometric.readthedocs.io/en/latest/tutorial/create_gnn.html) for the accompanying tutorial).
For example, this is all it takes to implement the [edge convolutional layer](https://arxiv.org/abs/1801.07829) from Wang *et al.*:
$$x_i^{\prime} = \max_{j \in \mathcal{N}(i)} \textrm{MLP}_{\theta} \left( \left[ \, x_i, \; x_j - x_i \, \right] \right)$$
```python
import torch
from torch import Tensor
from torch.nn import Sequential, Linear, ReLU
from torch_geometric.nn import MessagePassing
class EdgeConv(MessagePassing):
def __init__(self, in_channels, out_channels):
super().__init__(aggr="max") # "Max" aggregation.
self.mlp = Sequential(
Linear(2 * in_channels, out_channels),
ReLU(),
Linear(out_channels, out_channels),
)
def forward(self, x: Tensor, edge_index: Tensor) -> Tensor:
# x: Node feature matrix of shape [num_nodes, in_channels]
# edge_index: Graph connectivity matrix of shape [2, num_edges]
return self.propagate(edge_index, x=x) # shape [num_nodes, out_channels]
def message(self, x_j: Tensor, x_i: Tensor) -> Tensor:
# x_j: Source node features of shape [num_edges, in_channels]
# x_i: Target node features of shape [num_edges, in_channels]
edge_features = torch.cat([x_i, x_j - x_i], dim=-1)
return self.mlp(edge_features) # shape [num_edges, out_channels]
```
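The layer defined above can then be used like any other PyG operator. The following sketch feeds it randomly generated node features and edges (all shapes are chosen purely for illustration):

```python
x = torch.randn(100, 16)                      # 100 nodes with 16 features each
edge_index = torch.randint(0, 100, (2, 500))  # 500 random directed edges

conv = EdgeConv(in_channels=16, out_channels=32)
out = conv(x, edge_index)                     # shape [100, 32]
```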
## Architecture Overview
PyG provides a multi-layer framework that enables users to build Graph Neural Network solutions on both low and high levels.
It comprises the following components:
- The PyG **engine** utilizes the powerful PyTorch deep learning framework with full [`torch.compile`](https://pytorch-geometric.readthedocs.io/en/latest/advanced/compile.html) and [TorchScript](https://pytorch-geometric.readthedocs.io/en/latest/advanced/jit.html) support, as well as additions of efficient CPU/CUDA libraries for operating on sparse data, *e.g.*, [`pyg-lib`](https://github.com/pyg-team/pyg-lib).
- The PyG **storage** handles data processing, transformation and loading pipelines. It is capable of handling and processing large-scale graph datasets, and provides effective solutions for heterogeneous graphs. It further provides a variety of sampling solutions, which enable training of GNNs on large-scale graphs.
- The PyG **operators** bundle essential functionalities for implementing Graph Neural Networks. PyG supports important GNN building blocks that can be combined and applied to various parts of a GNN model, ensuring rich flexibility of GNN design.
- Finally, PyG provides an abundant set of GNN **models**, and examples that showcase GNN models on standard graph benchmarks. Thanks to its flexibility, users can easily build and modify custom GNN models to fit their specific needs.
<p align="center">
<img width="100%" src="https://raw.githubusercontent.com/pyg-team/pytorch_geometric/master/docs/source/_figures/architecture.svg?sanitize=true" />
</p>
## Implemented GNN Models
We list currently supported PyG models, layers and operators according to category:
**GNN layers:**
All Graph Neural Network layers are implemented via the **[`nn.MessagePassing`](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.MessagePassing.html)** interface.
A GNN layer specifies how to perform message passing, *i.e.* by designing different message, aggregation and update functions as defined [here](https://pytorch-geometric.readthedocs.io/en/latest/tutorial/create_gnn.html).
These GNN layers can be stacked together to create Graph Neural Network models.
- **[GCNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GCNConv.html)** from Kipf and Welling: [Semi-Supervised Classification with Graph Convolutional Networks](https://arxiv.org/abs/1609.02907) (ICLR 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py)\]
- **[ChebConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ChebConv.html)** from Defferrard *et al.*: [Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering](https://arxiv.org/abs/1606.09375) (NIPS 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py#L36-L37)\]
- **[GATConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GATConv.html)** from Veličković *et al.*: [Graph Attention Networks](https://arxiv.org/abs/1710.10903) (ICLR 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gat.py)\]
<details>
<summary><b>Expand to see all implemented GNN layers...</b></summary>
- **[GCN2Conv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GCN2Conv.html)** from Chen *et al.*: [Simple and Deep Graph Convolutional Networks](https://arxiv.org/abs/2007.02133) (ICML 2020) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn2_cora.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn2_ppi.py)\]
- **[SplineConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SplineConv.html)** from Fey *et al.*: [SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels](https://arxiv.org/abs/1711.08920) (CVPR 2018) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/cora.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/faust.py)\]
- **[NNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.NNConv.html)** from Gilmer *et al.*: [Neural Message Passing for Quantum Chemistry](https://arxiv.org/abs/1704.01212) (ICML 2017) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/qm9_nn_conv.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mnist_nn_conv.py)\]
- **[CGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.CGConv.html)** from Xie and Grossman: [Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties](https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.145301) (Physical Review Letters 120, 2018)
- **[ECConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ECConv.html)** from Simonovsky and Komodakis: [Edge-Conditioned Convolution on Graphs](https://arxiv.org/abs/1704.02901) (CVPR 2017)
- **[EGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.EGConv.html)** from Tailor *et al.*: [Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions](https://arxiv.org/abs/2104.01481) (GNNSys 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/egc.py)\]
- **[GATv2Conv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GATv2Conv.html)** from Brody *et al.*: [How Attentive are Graph Attention Networks?](https://arxiv.org/abs/2105.14491) (ICLR 2022)
- **[TransformerConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.TransformerConv.html)** from Shi *et al.*: [Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification](https://arxiv.org/abs/2009.03509) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/unimp_arxiv.py)\]
- **[SAGEConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SAGEConv.html)** from Hamilton *et al.*: [Inductive Representation Learning on Large Graphs](https://arxiv.org/abs/1706.02216) (NIPS 2017) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/reddit.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/ogbn_train.py), [**Example3**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_sage_unsup.py), [**Example4**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_sage_unsup_ppi.py)\]
- **[GraphConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GraphConv.html)** from, *e.g.*, Morris *et al.*: [Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks](https://arxiv.org/abs/1810.02244) (AAAI 2019)
- **[GatedGraphConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GatedGraphConv.html)** from Li *et al.*: [Gated Graph Sequence Neural Networks](https://arxiv.org/abs/1511.05493) (ICLR 2016)
- **[ResGatedGraphConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ResGatedGraphConv.html)** from Bresson and Laurent: [Residual Gated Graph ConvNets](https://arxiv.org/abs/1711.07553) (CoRR 2017)
- **[GINConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GINConv.html)** from Xu *et al.*: [How Powerful are Graph Neural Networks?](https://arxiv.org/abs/1810.00826) (ICLR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mutag_gin.py)\]
- **[GINEConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GINEConv.html)** from Hu *et al.*: [Strategies for Pre-training Graph Neural Networks](https://arxiv.org/abs/1905.12265) (ICLR 2020)
- **[ARMAConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.ARMAConv.html)** from Bianchi *et al.*: [Graph Neural Networks with Convolutional ARMA Filters](https://arxiv.org/abs/1901.01343) (CoRR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/arma.py)\]
- **[SGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SGConv.html)** from Wu *et al.*: [Simplifying Graph Convolutional Networks](https://arxiv.org/abs/1902.07153) (CoRR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/sgc.py)\]
- **[APPNP](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.APPNP.html)** from Klicpera *et al.*: [Predict then Propagate: Graph Neural Networks meet Personalized PageRank](https://arxiv.org/abs/1810.05997) (ICLR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/citation/appnp.py)\]
- **[MFConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.MFConv.html)** from Duvenaud *et al.*: [Convolutional Networks on Graphs for Learning Molecular Fingerprints](https://arxiv.org/abs/1509.09292) (NIPS 2015)
- **[AGNNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.AGNNConv.html)** from Thekumparampil *et al.*: [Attention-based Graph Neural Network for Semi-Supervised Learning](https://arxiv.org/abs/1803.03735) (CoRR 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/agnn.py)\]
- **[TAGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.TAGConv.html)** from Du *et al.*: [Topology Adaptive Graph Convolutional Networks](https://arxiv.org/abs/1710.10370) (CoRR 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/tagcn.py)\]
- **[PNAConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PNAConv.html)** from Corso *et al.*: [Principal Neighbourhood Aggregation for Graph Nets](https://arxiv.org/abs/2004.05718) (CoRR 2020) \[**[Example](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/pna.py)**\]
- **[FAConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.FAConv.html)** from Bo *et al.*: [Beyond Low-Frequency Information in Graph Convolutional Networks](https://arxiv.org/abs/2101.00797) (AAAI 2021)
- **[PDNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PDNConv.html)** from Rozemberczki *et al.*: [Pathfinder Discovery Networks for Neural Message Passing](https://arxiv.org/abs/2010.12878) (WWW 2021)
- **[RGCNConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.RGCNConv.html)** from Schlichtkrull *et al.*: [Modeling Relational Data with Graph Convolutional Networks](https://arxiv.org/abs/1703.06103) (ESWC 2018) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rgcn.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rgcn_link_pred.py)\]
- **[RGATConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.RGATConv.html)** from Busbridge *et al.*: [Relational Graph Attention Networks](https://arxiv.org/abs/1904.05811) (CoRR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rgat.py)\]
- **[FiLMConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.FiLMConv.html)** from Brockschmidt: [GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation](https://arxiv.org/abs/1906.12192) (ICML 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/film.py)\]
- **[SignedConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SignedConv.html)** from Derr *et al.*: [Signed Graph Convolutional Network](https://arxiv.org/abs/1808.06354) (ICDM 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/signed_gcn.py)\]
- **[DNAConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.DNAConv.html)** from Fey: [Just Jump: Dynamic Neighborhood Aggregation in Graph Neural Networks](https://arxiv.org/abs/1904.04849) (ICLR-W 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/dna.py)\]
- **[PANConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PANConv.html)** from Ma *et al.*: [Path Integral Based Convolution and Pooling for Graph Neural Networks](https://arxiv.org/abs/2006.16811) (NeurIPS 2020)
- **[PointNetConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PointNetConv.html)** (including **[Iterative Farthest Point Sampling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.fps.html)**, dynamic graph generation based on **[nearest neighbor](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.knn_graph.html)** or **[maximum distance](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.radius_graph.html)**, and **[k-NN interpolation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.unpool.knn_interpolate.html)** for upsampling) from Qi *et al.*: [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](https://arxiv.org/abs/1612.00593) (CVPR 2017) and [PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space](https://arxiv.org/abs/1706.02413) (NIPS 2017) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/pointnet2_classification.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/pointnet2_segmentation.py)\]
- **[EdgeConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.EdgeConv.html)** from Wang *et al.*: [Dynamic Graph CNN for Learning on Point Clouds](https://arxiv.org/abs/1801.07829) (CoRR, 2018) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/dgcnn_classification.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/dgcnn_segmentation.py)\]
- **[XConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.XConv.html)** from Li *et al.*: [PointCNN: Convolution On X-Transformed Points](https://arxiv.org/abs/1801.07791) (NeurIPS 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/points/point_cnn.py)\]
- **[PPFConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PPFConv.html)** from Deng *et al.*: [PPFNet: Global Context Aware Local Features for Robust 3D Point Matching](https://arxiv.org/abs/1802.02669) (CVPR 2018)
- **[GMMConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GMMConv.html)** from Monti *et al.*: [Geometric Deep Learning on Graphs and Manifolds using Mixture Model CNNs](https://arxiv.org/abs/1611.08402) (CVPR 2017)
- **[FeaStConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.FeaStConv.html)** from Verma *et al.*: [FeaStNet: Feature-Steered Graph Convolutions for 3D Shape Analysis](https://arxiv.org/abs/1706.05206) (CVPR 2018)
- **[PointTransformerConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.PointTransformerConv.html)** from Zhao *et al.*: [Point Transformer](https://arxiv.org/abs/2012.09164) (2020)
- **[HypergraphConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.HypergraphConv.html)** from Bai *et al.*: [Hypergraph Convolution and Hypergraph Attention](https://arxiv.org/abs/1901.08150) (CoRR 2019)
- **[GravNetConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GravNetConv.html)** from Qasim *et al.*: [Learning Representations of Irregular Particle-detector Geometry with Distance-weighted Graph Networks](https://arxiv.org/abs/1902.07987) (European Physics Journal C, 2019)
- **[SuperGAT](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SuperGATConv.html)** from Kim and Oh: [How To Find Your Friendly Neighborhood: Graph Attention Design With Self-Supervision](https://openreview.net/forum?id=Wi5KUNlqWty) (ICLR 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/super_gat.py)\]
- **[HGTConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.HGTConv.html)** from Hu *et al.*: [Heterogeneous Graph Transformer](https://arxiv.org/abs/2003.01332) (WWW 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/hetero/hgt_dblp.py)\]
- **[HEATConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.HEATConv.html)** from Mo *et al.*: [Heterogeneous Edge-Enhanced Graph Attention Network For Multi-Agent Trajectory Prediction](https://arxiv.org/abs/2106.07161) (CoRR 2021)
- **[SSGConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SSGConv.html)** from Zhu *et al.*: [Simple Spectral Graph Convolution](https://openreview.net/forum?id=CYO5T-YjWZV) (ICLR 2021)
- **[FusedGATConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.FusedGATConv.html)** from Zhang *et al.*: [Understanding GNN Computational Graph: A Coordinated Computation, IO, and Memory Perspective](https://proceedings.mlsys.org/paper/2022/file/9a1158154dfa42caddbd0694a4e9bdc8-Paper.pdf) (MLSys 2022)
- **[GPSConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GPSConv.html)** from Rampášek *et al.*: [Recipe for a General, Powerful, Scalable Graph Transformer](https://arxiv.org/abs/2205.12454) (NeurIPS 2022) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_gps.py)\]
</details>
**Pooling layers:**
Graph pooling layers combine the vectorial representations of a set of nodes in a graph (or a subgraph) into a single vector representation that summarizes the properties of its nodes.
They are commonly applied to graph-level tasks, which require combining node features into a single graph representation (see the readout sketch after the list below).
- **[Top-K Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.TopKPooling.html)** from Gao and Ji: [Graph U-Nets](https://arxiv.org/abs/1905.05178) (ICML 2019), Cangea *et al.*: [Towards Sparse Hierarchical Graph Classifiers](https://arxiv.org/abs/1811.01287) (NeurIPS-W 2018) and Knyazev *et al.*: [Understanding Attention and Generalization in Graph Neural Networks](https://arxiv.org/abs/1905.02850) (ICLR-W 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_topk_pool.py)\]
- **[DiffPool](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.dense.dense_diff_pool.html)** from Ying *et al.*: [Hierarchical Graph Representation Learning with Differentiable Pooling](https://arxiv.org/abs/1806.08804) (NeurIPS 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_diff_pool.py)\]
<details>
<summary><b>Expand to see all implemented pooling layers...</b></summary>
- **[Attentional Aggregation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.AttentionalAggregation.html)** from Li *et al.*: [Graph Matching Networks for Learning the Similarity of Graph Structured Objects](https://arxiv.org/abs/1904.12787) (ICML 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/global_attention.py)\]
- **[Set2Set](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.Set2Set.html)** from Vinyals *et al.*: [Order Matters: Sequence to Sequence for Sets](https://arxiv.org/abs/1511.06391) (ICLR 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/set2set.py)\]
- **[Sort Aggregation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.SortAggregation.html)** from Zhang *et al.*: [An End-to-End Deep Learning Architecture for Graph Classification](https://www.cse.wustl.edu/~muhan/papers/AAAI_2018_DGCNN.pdf) (AAAI 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/sort_pool.py)\]
- **[MinCut Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.dense.dense_mincut_pool.html)** from Bianchi *et al.*: [Spectral Clustering with Graph Neural Networks for Graph Pooling](https://arxiv.org/abs/1907.00481) (ICML 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_mincut_pool.py)\]
- **[DMoN Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.dense.DMoNPooling.html)** from Tsitsulin *et al.*: [Graph Clustering with Graph Neural Networks](https://arxiv.org/abs/2006.16904) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_dmon_pool.py)\]
- **[Graclus Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.graclus.html)** from Dhillon *et al.*: [Weighted Graph Cuts without Eigenvectors: A Multilevel Approach](http://www.cs.utexas.edu/users/inderjit/public_papers/multilevel_pami.pdf) (PAMI 2007) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mnist_graclus.py)\]
- **[Voxel Grid Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.voxel_grid.html)** from, *e.g.*, Simonovsky and Komodakis: [Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs](https://arxiv.org/abs/1704.02901) (CVPR 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mnist_voxel_grid.py)\]
- **[SAG Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.SAGPooling.html)** from Lee *et al.*: [Self-Attention Graph Pooling](https://arxiv.org/abs/1904.08082) (ICML 2019) and Knyazev *et al.*: [Understanding Attention and Generalization in Graph Neural Networks](https://arxiv.org/abs/1905.02850) (ICLR-W 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/sag_pool.py)\]
- **[Edge Pooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.EdgePooling.html)** from Diehl *et al.*: [Towards Graph Pooling by Edge Contraction](https://graphreason.github.io/papers/17.pdf) (ICML-W 2019) and Diehl: [Edge Contraction Pooling for Graph Neural Networks](https://arxiv.org/abs/1905.10990) (CoRR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/edge_pool.py)\]
- **[ASAPooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.ASAPooling.html)** from Ranjan *et al.*: [ASAP: Adaptive Structure Aware Pooling for Learning Hierarchical Graph Representations](https://arxiv.org/abs/1911.07979) (AAAI 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/asap.py)\]
- **[PANPooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.PANPooling.html)** from Ma *et al.*: [Path Integral Based Convolution and Pooling for Graph Neural Networks](https://arxiv.org/abs/2006.16811) (NeurIPS 2020)
- **[MemPooling](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.pool.MemPooling.html)** from Khasahmadi *et al.*: [Memory-Based Graph Networks](https://arxiv.org/abs/2002.09518) (ICLR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/mem_pool.py)\]
- **[Graph Multiset Transformer](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.GraphMultisetTransformer.html)** from Baek *et al.*: [Accurate Learning of Graph Representations with Graph Multiset Pooling](https://arxiv.org/abs/2102.11533) (ICLR 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/proteins_gmt.py)\]
- **[Equilibrium Aggregation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.aggr.EquilibriumAggregation.html)** from Bartunov *et al.*: [Equilibrium Aggregation: Encoding Sets via Optimization](https://arxiv.org/abs/2202.12795) (UAI 2022) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/equilibrium_median.py)\]
</details>
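As a minimal illustration of the readout idea described above, the following sketch pools node embeddings into one embedding per graph via mean aggregation (`x` and `batch` are assumed to come from a mini-batch over a graph-level dataset):

```python
from torch_geometric.nn import global_mean_pool

# x: node embeddings of shape [num_nodes, channels]
# batch: vector of shape [num_nodes] assigning each node to its graph
graph_emb = global_mean_pool(x, batch)  # shape [num_graphs, channels]
```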
**GNN models:**
Our supported GNN models incorporate multiple message passing layers, and users can directly use these pre-defined models to make predictions on graphs.
Unlike simple stacking of GNN layers, these models can involve pre-processing, additional learnable parameters, skip connections, graph coarsening, etc. (a minimal usage sketch follows the list below).
- **[SchNet](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.SchNet.html)** from Schütt *et al.*: [SchNet: A Continuous-filter Convolutional Neural Network for Modeling Quantum Interactions](https://arxiv.org/abs/1706.08566) (NIPS 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/qm9_pretrained_schnet.py)\]
- **[DimeNet](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.DimeNet.html)** and **[DimeNetPlusPlus](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.DimeNetPlusPlus.html)** from Klicpera *et al.*: [Directional Message Passing for Molecular Graphs](https://arxiv.org/abs/2003.03123) (ICLR 2020) and [Fast and Uncertainty-Aware Directional Message Passing for Non-Equilibrium Molecules](https://arxiv.org/abs/2011.14115) (NeurIPS-W 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/qm9_pretrained_dimenet.py)\]
- **[Node2Vec](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.Node2Vec.html)** from Grover and Leskovec: [node2vec: Scalable Feature Learning for Networks](https://arxiv.org/abs/1607.00653) (KDD 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/node2vec.py)\]
- **[Deep Graph Infomax](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.DeepGraphInfomax.html)** from Veličković *et al.*: [Deep Graph Infomax](https://arxiv.org/abs/1809.10341) (ICLR 2019) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/infomax_transductive.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/infomax_inductive.py)\]
- **Deep Multiplex Graph Infomax** from Park *et al.*: [Unsupervised Attributed Multiplex Network Embedding](https://arxiv.org/abs/1911.06750) (AAAI 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/hetero/dmgi_unsup.py)\]
- **[Masked Label Prediction](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.MaskLabel.html)** from Shi *et al.*: [Masked Label Prediction: Unified Message Passing Model for Semi-Supervised Classification](https://arxiv.org/abs/2009.03509) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/unimp_arxiv.py)\]
- **[PMLP](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.PMLP.html)** from Yang *et al.*: [Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs](https://arxiv.org/abs/2212.09034) (ICLR 2023)
<details>
<summary><b>Expand to see all implemented GNN models...</b></summary>
- **[Jumping Knowledge](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.JumpingKnowledge.html)** from Xu *et al.*: [Representation Learning on Graphs with Jumping Knowledge Networks](https://arxiv.org/abs/1806.03536) (ICML 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/benchmark/kernel/gin.py#L54-L106)\]
- A **[MetaLayer](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.MetaLayer.html)** for building any kind of graph network similar to the [TensorFlow Graph Nets library](https://github.com/deepmind/graph_nets) from Battaglia *et al.*: [Relational Inductive Biases, Deep Learning, and Graph Networks](https://arxiv.org/abs/1806.01261) (CoRR 2018)
- **[MetaPath2Vec](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.MetaPath2Vec.html)** from Dong *et al.*: [metapath2vec: Scalable Representation Learning for Heterogeneous Networks](https://ericdongyx.github.io/papers/KDD17-dong-chawla-swami-metapath2vec.pdf) (KDD 2017) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/hetero/metapath2vec.py)\]
- All variants of **[Graph Autoencoders](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.GAE.html)** and **[Variational Autoencoders](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.VGAE.html)** from:
- [Variational Graph Auto-Encoders](https://arxiv.org/abs/1611.07308) from Kipf and Welling (NIPS-W 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/autoencoder.py)\]
- [Adversarially Regularized Graph Autoencoder for Graph Embedding](https://arxiv.org/abs/1802.04407) from Pan *et al.* (IJCAI 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/argva_node_clustering.py)\]
- [Simple and Effective Graph Autoencoders with One-Hop Linear Models](https://arxiv.org/abs/2001.07614) from Salha *et al.* (ECML 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/autoencoder.py)\]
- **[SEAL](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/seal_link_pred.py)** from Zhang and Chen: [Link Prediction Based on Graph Neural Networks](https://arxiv.org/pdf/1802.09691.pdf) (NeurIPS 2018) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/seal_link_pred.py)\]
- **[RENet](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.RENet.html)** from Jin *et al.*: [Recurrent Event Network for Reasoning over Temporal Knowledge Graphs](https://arxiv.org/abs/1904.05530) (ICLR-W 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/renet.py)\]
- **[GraphUNet](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.GraphUNet.html)** from Gao and Ji: [Graph U-Nets](https://arxiv.org/abs/1905.05178) (ICML 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_unet.py)\]
- **[AttentiveFP](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.AttentiveFP.html)** from Xiong *et al.*: [Pushing the Boundaries of Molecular Representation for Drug Discovery with the Graph Attention Mechanism](https://pubs.acs.org/doi/10.1021/acs.jmedchem.9b00959) (J. Med. Chem. 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/attentive_fp.py)\]
- **[DeepGCN](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.DeepGCNLayer.html)** and the **[GENConv](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.GENConv.html)** from Li *et al.*: [DeepGCNs: Can GCNs Go as Deep as CNNs?](https://arxiv.org/abs/1904.03751) (ICCV 2019) and [DeeperGCN: All You Need to Train Deeper GCNs](https://arxiv.org/abs/2006.07739) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/ogbn_proteins_deepgcn.py)\]
- **[RECT](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.RECT_L.html)** from Wang *et al.*: [Network Embedding with Completely-imbalanced Labels](https://ieeexplore.ieee.org/document/8979355) (TKDE 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rect.py)\]
- **[GNNExplainer](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.explain.algorithm.GNNExplainer.html)** from Ying *et al.*: [GNNExplainer: Generating Explanations for Graph Neural Networks](https://arxiv.org/abs/1903.03894) (NeurIPS 2019) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/explain/gnn_explainer.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/explain/gnn_explainer_ba_shapes.py), [**Example3**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/explain/gnn_explainer_link_pred.py)\]
- **Graph-less Neural Networks** from Zhang *et al.*: [Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation](https://arxiv.org/abs/2110.08727) (CoRR 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/glnn.py)\]
- **[LINKX](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.LINKX.html)** from Lim *et al.*: [Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods](https://arxiv.org/abs/2110.14446) (NeurIPS 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/linkx.py)\]
- **[RevGNN](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.GroupAddRev.html)** from Li *et al.*: [Training Graph Neural Networks with 1000 Layers](https://arxiv.org/abs/2106.07476) (ICML 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/rev_gnn.py)\]
- **[TransE](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.kge.TransE.html)** from Bordes *et al.*: [Translating Embeddings for Modeling Multi-Relational Data](https://proceedings.neurips.cc/paper/2013/file/1cecc7a77928ca8133fa24680a88d2f9-Paper.pdf) (NIPS 2013) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/kge_fb15k_237.py)\]
- **[ComplEx](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.kge.ComplEx.html)** from Trouillon *et al.*: [Complex Embeddings for Simple Link Prediction](https://arxiv.org/abs/1606.06357) (ICML 2016) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/kge_fb15k_237.py)\]
- **[DistMult](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.kge.DistMult.html)** from Yang *et al.*: [Embedding Entities and Relations for Learning and Inference in Knowledge Bases](https://arxiv.org/abs/1412.6575) (ICLR 2015) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/kge_fb15k_237.py)\]
- **[RotatE](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.kge.RotatE.html)** from Sun *et al.*: [RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space](https://arxiv.org/abs/1902.10197) (ICLR 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/kge_fb15k_237.py)\]
</details>
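As a minimal sketch of using such a pre-defined model directly, reusing the `x` and `edge_index` tensors from the `EdgeConv` usage sketch above (all hyperparameters are illustrative):

```python
from torch_geometric.nn.models import GCN

# A ready-made GCN; layer stacking, activations and dropout are handled internally.
model = GCN(in_channels=16, hidden_channels=32, num_layers=2, out_channels=7)
out = model(x, edge_index)  # shape [100, 7]
```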
**GNN operators and utilities:**
PyG comes with a rich set of neural network operators that are commonly used in many GNN models.
They follow an extensible design: it is easy to apply these operators and graph utilities to existing GNN layers and models to further enhance model performance (see the edge-dropout sketch after this list).
- **[DropEdge](https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.dropout_edge)** from Rong *et al.*: [DropEdge: Towards Deep Graph Convolutional Networks on Node Classification](https://openreview.net/forum?id=Hkx1qkrKPr) (ICLR 2020)
- **[DropNode](https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.dropout_node)**, **[MaskFeature](https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.mask_feature)** and **[AddRandomEdge](https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.add_random_edge)** from You *et al.*: [Graph Contrastive Learning with Augmentations](https://arxiv.org/abs/2010.13902) (NeurIPS 2020)
- **[DropPath](https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.dropout_path)** from Li *et al.*: [MaskGAE: Masked Graph Modeling Meets Graph Autoencoders](https://arxiv.org/abs/2205.10053) (arXiv 2022)
- **[ShuffleNode](https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.shuffle_node)** from Veličković *et al.*: [Deep Graph Infomax](https://arxiv.org/abs/1809.10341) (ICLR 2019)
- **[GraphNorm](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.norm.GraphNorm.html)** from Cai *et al.*: [GraphNorm: A Principled Approach to Accelerating Graph Neural Network Training](https://proceedings.mlr.press/v139/cai21e.html) (ICML 2021)
- **[GDC](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.transforms.GDC.html)** from Klicpera *et al.*: [Diffusion Improves Graph Learning](https://arxiv.org/abs/1911.05485) (NeurIPS 2019) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/gcn.py)\]
<details>
<summary><b>Expand to see all implemented GNN operators and utilities...</b></summary>
- **[GraphSizeNorm](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.norm.GraphSizeNorm.html)** from Dwivedi *et al.*: [Benchmarking Graph Neural Networks](https://arxiv.org/abs/2003.00982) (CoRR 2020)
- **[PairNorm](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.norm.PairNorm.html)** from Zhao and Akoglu: [PairNorm: Tackling Oversmoothing in GNNs](https://arxiv.org/abs/1909.12223) (ICLR 2020)
- **[MeanSubtractionNorm](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.norm.MeanSubtractionNorm.html)** from Yang *et al.*: [Revisiting "Over-smoothing" in Deep GCNs](https://arxiv.org/abs/2003.13663) (CoRR 2020)
- **[DiffGroupNorm](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.norm.DiffGroupNorm.html)** from Zhou *et al.*: [Towards Deeper Graph Neural Networks with Differentiable Group Normalization](https://arxiv.org/abs/2006.06972) (NeurIPS 2020)
- **[Tree Decomposition](https://pytorch-geometric.readthedocs.io/en/latest/modules/utils.html#torch_geometric.utils.tree_decomposition)** from Jin *et al.*: [Junction Tree Variational Autoencoder for Molecular Graph Generation](https://arxiv.org/abs/1802.04364) (ICML 2018)
- **[TGN](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.TGNMemory.html)** from Rossi *et al.*: [Temporal Graph Networks for Deep Learning on Dynamic Graphs](https://arxiv.org/abs/2006.10637) (GRL+ 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/tgn.py)\]
- **[Weisfeiler Lehman Operator](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.WLConv.html)** from Weisfeiler and Lehman: [A Reduction of a Graph to a Canonical Form and an Algebra Arising During this Reduction](https://www.iti.zcu.cz/wl2018/pdf/wl_paper_translation.pdf) (Nauchno-Technicheskaya Informatsia 1968) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/wl_kernel.py)\]
- **[Continuous Weisfeiler Lehman Operator](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.WLConvContinuous.html)** from Togninalli *et al.*: [Wasserstein Weisfeiler-Lehman Graph Kernels](https://arxiv.org/abs/1906.01277) (NeurIPS 2019)
- **[Label Propagation](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.LabelPropagation.html)** from Zhu and Ghahramani: [Learning from Labeled and Unlabeled Data with Label Propagation](http://mlg.eng.cam.ac.uk/zoubin/papers/CMU-CALD-02-107.pdf) (CMU-CALD 2002) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/label_prop.py)\]
- **[Local Degree Profile](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.transforms.LocalDegreeProfile)** from Cai and Wang: [A Simple yet Effective Baseline for Non-attribute Graph Classification](https://arxiv.org/abs/1811.03508) (CoRR 2018)
- **[CorrectAndSmooth](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.models.CorrectAndSmooth.html)** from Huang *et al.*: [Combining Label Propagation And Simple Models Out-performs Graph Neural Networks](https://arxiv.org/abs/2010.13993) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/correct_and_smooth.py)\]
- **[Gini](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.functional.gini.html)** and **[BRO](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.functional.bro.html)** regularization from Henderson *et al.*: [Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity](https://arxiv.org/abs/2105.04854) (ICML 2021)
- **[RootedEgoNets](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.transforms.RootedEgoNets)** and **[RootedRWSubgraph](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.transforms.RootedRWSubgraph)** from Zhao *et al.*: [From Stars to Subgraphs: Uplifting Any GNN with Local Structure Awareness](https://arxiv.org/abs/2110.03753) (ICLR 2022)
- **[FeaturePropagation](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.transforms.FeaturePropagation)** from Rossi *et al.*: [On the Unreasonable Effectiveness of Feature Propagation in Learning on Graphs with Missing Node Features](https://arxiv.org/abs/2111.12128) (CoRR 2021)
</details>
**Scalable GNNs:**
PyG supports the implementation of Graph Neural Networks that can scale to large-scale graphs.
Such applications are challenging, since the entire graph, its associated features, and the GNN parameters often cannot fit into GPU memory.
Many state-of-the-art scalability approaches tackle this challenge by sampling neighborhoods for mini-batch training, by clustering and partitioning the graph, or by using simplified GNN models.
These approaches are implemented in PyG and can be combined with the GNN layers, operators, and models above; a minimal neighbor-sampling sketch follows the list below.
- **[NeighborLoader](https://pytorch-geometric.readthedocs.io/en/latest/modules/loader.html#torch_geometric.loader.NeighborLoader)** from Hamilton *et al.*: [Inductive Representation Learning on Large Graphs](https://arxiv.org/abs/1706.02216) (NIPS 2017) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/reddit.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/ogbn_train.py), [**Example3**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/hetero/to_hetero_mag.py)\]
- **[ClusterGCN](https://pytorch-geometric.readthedocs.io/en/latest/modules/loader.html#torch_geometric.loader.ClusterLoader)** from Chiang *et al.*: [Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks](https://arxiv.org/abs/1905.07953) (KDD 2019) \[[**Example1**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/cluster_gcn_reddit.py), [**Example2**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/cluster_gcn_ppi.py)\]
- **[GraphSAINT](https://pytorch-geometric.readthedocs.io/en/latest/modules/loader.html#torch_geometric.loader.GraphSAINTSampler)** from Zeng *et al.*: [GraphSAINT: Graph Sampling Based Inductive Learning Method](https://arxiv.org/abs/1907.04931) (ICLR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/graph_saint.py)\]
<details>
<summary><b>Expand to see all implemented scalable GNNs...</b></summary>
- **[ShaDow](https://pytorch-geometric.readthedocs.io/en/latest/modules/loader.html#torch_geometric.loader.ShaDowKHopSampler)** from Zeng *et al.*: [Decoupling the Depth and Scope of Graph Neural Networks](https://arxiv.org/abs/2201.07858) (NeurIPS 2021) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/shadow.py)\]
- **[SIGN](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.transforms.SIGN.html)** from Rossi *et al.*: [SIGN: Scalable Inception Graph Neural Networks](https://arxiv.org/abs/2004.11198) (CoRR 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/sign.py)\]
- **[HGTLoader](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.loader.HGTLoader.html)** from Hu *et al.*: [Heterogeneous Graph Transformer](https://arxiv.org/abs/2003.01332) (WWW 2020) \[[**Example**](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/hetero/to_hetero_mag.py)\]
</details>
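As a minimal, hedged sketch of the sampling-based approach (the dataset root path is illustrative), mini-batches over the `Cora` graph can be produced with `NeighborLoader`:
```
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader
dataset = Planetoid(root='data/Planetoid', name='Cora')
data = dataset[0]
# Sample at most 10 neighbors in the first hop and 5 in the second hop,
# rooted at 32 training nodes per mini-batch:
loader = NeighborLoader(
    data,
    num_neighbors=[10, 5],
    batch_size=32,
    input_nodes=data.train_mask,
)
for batch in loader:
    # The first `batch.batch_size` nodes of each mini-batch are the seed nodes.
    print(batch.num_nodes, batch.batch_size)
    break
```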
## Installation
PyG is available for Python 3.9 to Python 3.13.
From **PyG 2.3** onwards, you can install and use PyG **without any external library** beyond PyTorch.
For this, simply run
```
pip install torch_geometric
```
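After installation, a quick sanity check (a sketch; the printed summary is what recent PyG versions produce) is to build a tiny graph by hand:
```
import torch
from torch_geometric.data import Data
# A graph with 3 nodes, 4 directed edges, and 8 random features per node:
edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])
x = torch.randn(3, 8)
data = Data(x=x, edge_index=edge_index)
print(data)  # Data(x=[3, 8], edge_index=[2, 4])
```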
### Additional Libraries
If you want to utilize the full set of features of PyG, there are several additional libraries you may want to install:
- **[`pyg-lib`](https://github.com/pyg-team/pyg-lib)**: Heterogeneous GNN operators and graph sampling routines
- **[`torch-scatter`](https://github.com/rusty1s/pytorch_scatter)**: Accelerated and efficient sparse reductions
- **[`torch-sparse`](https://github.com/rusty1s/pytorch_sparse)**: [`SparseTensor`](https://pytorch-geometric.readthedocs.io/en/latest/advanced/sparse_tensor.html) support
- **[`torch-cluster`](https://github.com/rusty1s/pytorch_cluster)**: Graph clustering routines
- **[`torch-spline-conv`](https://github.com/rusty1s/pytorch_spline_conv)**: [`SplineConv`](https://pytorch-geometric.readthedocs.io/en/latest/generated/torch_geometric.nn.conv.SplineConv.html) support
These packages come with their own CPU and GPU kernel implementations based on the [PyTorch C++/CUDA/hip(ROCm) extension interface](https://github.com/pytorch/extension-cpp).
For basic usage of PyG, these dependencies are **fully optional**.
We recommend starting with a minimal installation and installing additional dependencies once you actually need them.
For ease of installation of these extensions, we provide `pip` wheels for all major OS/PyTorch/CUDA combinations; see [here](https://data.pyg.org/whl).
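Since these extensions are optional, you can check at runtime which of them PyG has detected. As a sketch (these flags live in `torch_geometric.typing` in recent PyG releases; the exact set may vary):
```
from torch_geometric.typing import (
    WITH_PYG_LIB,
    WITH_TORCH_CLUSTER,
    WITH_TORCH_SCATTER,
    WITH_TORCH_SPARSE,
    WITH_TORCH_SPLINE_CONV,
)
print('pyg-lib:          ', WITH_PYG_LIB)
print('torch-scatter:    ', WITH_TORCH_SCATTER)
print('torch-sparse:     ', WITH_TORCH_SPARSE)
print('torch-cluster:    ', WITH_TORCH_CLUSTER)
print('torch-spline-conv:', WITH_TORCH_SPLINE_CONV)
```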
#### PyTorch 2.7
To install the binaries for PyTorch 2.7.0, simply run
```
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.7.0+${CUDA}.html
```
where `${CUDA}` should be replaced by either `cpu`, `cu118`, `cu126`, or `cu128` depending on your PyTorch installation (a helper sketch for determining the right tag follows the table below).
| | `cpu` | `cu118` | `cu126` | `cu128` |
| ----------- | ----- | ------- | ------- | ------- |
| **Linux** | ✅ | ✅ | ✅ | ✅ |
| **Windows** | ✅ | ✅ | ✅ | ✅ |
| **macOS** | ✅ | | | |
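If you are unsure which tag matches your environment, the following sketch (assuming a standard PyTorch build string such as `2.7.0+cu126`) derives the wheel index URL from the local installation:
```
import torch
# 'cpu' for CPU-only builds; otherwise e.g. CUDA '12.6' maps to 'cu126':
cuda = torch.version.cuda
tag = 'cpu' if cuda is None else 'cu' + cuda.replace('.', '')
version = torch.__version__.split('+')[0]
print(f'https://data.pyg.org/whl/torch-{version}+{tag}.html')
```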
#### PyTorch 2.6
To install the binaries for PyTorch 2.6.0, simply run
```
pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.6.0+${CUDA}.html
```
where `${CUDA}` should be replaced by either `cpu`, `cu118`, `cu124`, or `cu126` depending on your PyTorch installation.
| | `cpu` | `cu118` | `cu124` | `cu126` |
| ----------- | ----- | ------- | ------- | ------- |
| **Linux** | ✅ | ✅ | ✅ | ✅ |
| **Windows** | ✅ | ✅ | ✅ | ✅ |
| **macOS** | ✅ | | | |
**Note:** Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0, PyTorch 1.12.0/1.12.1, PyTorch 1.13.0/1.13.1, PyTorch 2.0.0/2.0.1, PyTorch 2.1.0/2.1.1/2.1.2, PyTorch 2.2.0/2.2.1/2.2.2, PyTorch 2.3.0/2.3.1, PyTorch 2.4.0/2.4.1, and PyTorch 2.5.0/2.5.1 (following the same procedure).
**For older versions, you might need to explicitly specify the latest supported version number** or install via `pip install --no-index` in order to prevent pip from falling back to installation from source.
You can look up the latest supported version number [here](https://data.pyg.org/whl).
### NVIDIA PyG Container
NVIDIA provides a PyG docker container for effortlessly training and deploying GPU-accelerated GNNs with PyG; see [here](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pyg).
### Nightly and Master
If you want to experiment with the latest PyG features that have not been fully released yet, either install the **nightly version** of PyG via
```
pip install pyg-nightly
```
or install PyG **from master** via
```
pip install git+https://github.com/pyg-team/pytorch_geometric.git
```
### ROCm Wheels
The external [`pyg-rocm-build` repository](https://github.com/Looong01/pyg-rocm-build) provides wheels and detailed instructions on how to install PyG for ROCm.
If you have any questions about it, please open an issue [here](https://github.com/Looong01/pyg-rocm-build/issues).
## Cite
Please cite [our paper](https://arxiv.org/abs/1903.02428) (and the respective papers of the methods used) if you use this code in your own work:
```
@inproceedings{Fey/Lenssen/2019,
title={Fast Graph Representation Learning with {PyTorch Geometric}},
author={Fey, Matthias and Lenssen, Jan E.},
booktitle={ICLR Workshop on Representation Learning on Graphs and Manifolds},
year={2019},
}
```
Feel free to [email us](mailto:[email protected]) if you wish your work to be listed in the [external resources](https://pytorch-geometric.readthedocs.io/en/latest/external/resources.html).
If you notice anything unexpected, please open an [issue](https://github.com/pyg-team/pytorch_geometric/issues) and let us know.
If you have any questions or are missing a specific feature, feel free [to discuss them with us](https://github.com/pyg-team/pytorch_geometric/discussions).
We are motivated to constantly make PyG even better.
[contributing-image]: https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat
[contributing-url]: https://github.com/pyg-team/pytorch_geometric/blob/master/.github/CONTRIBUTING.md
[docs-image]: https://readthedocs.org/projects/pytorch-geometric/badge/?version=latest
[docs-url]: https://pytorch-geometric.readthedocs.io/en/latest
[linting-image]: https://github.com/pyg-team/pytorch_geometric/actions/workflows/linting.yml/badge.svg
[linting-url]: https://github.com/pyg-team/pytorch_geometric/actions/workflows/linting.yml
[pypi-image]: https://badge.fury.io/py/torch-geometric.svg
[pypi-url]: https://pypi.python.org/pypi/torch-geometric
[slack-image]: https://img.shields.io/badge/slack-pyg-brightgreen
[slack-url]: https://data.pyg.org/slack.html
[testing-image]: https://github.com/pyg-team/pytorch_geometric/actions/workflows/testing.yml/badge.svg
[testing-url]: https://github.com/pyg-team/pytorch_geometric/actions/workflows/testing.yml
|
https://github.com/pyg-team/pytorch_geometric
|
examples/agnn.py
|
import os.path as osp
import torch
import torch.nn.functional as F
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import AGNNConv
dataset = 'Cora'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset, transform=T.NormalizeFeatures())
data = dataset[0]
class Net(torch.nn.Module):
def __init__(self):
super().__init__()
self.lin1 = torch.nn.Linear(dataset.num_features, 16)
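        # Two AGNN propagation layers between the linear transforms: the first
        # keeps its attention coefficient fixed, the second learns it.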
self.prop1 = AGNNConv(requires_grad=False)
self.prop2 = AGNNConv(requires_grad=True)
self.lin2 = torch.nn.Linear(16, dataset.num_classes)
def forward(self):
x = F.dropout(data.x, training=self.training)
x = F.relu(self.lin1(x))
x = self.prop1(x, data.edge_index)
x = self.prop2(x, data.edge_index)
x = F.dropout(x, training=self.training)
x = self.lin2(x)
return F.log_softmax(x, dim=1)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model, data = Net().to(device), data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
def train():
model.train()
optimizer.zero_grad()
F.nll_loss(model()[data.train_mask], data.y[data.train_mask]).backward()
optimizer.step()
@torch.no_grad()
def test():
model.eval()
out, accs = model(), []
for _, mask in data('train_mask', 'val_mask', 'test_mask'):
pred = out[mask].argmax(1)
acc = pred.eq(data.y[mask]).sum().item() / mask.sum().item()
accs.append(acc)
return accs
best_val_acc = test_acc = 0
for epoch in range(1, 201):
train()
train_acc, val_acc, tmp_test_acc = test()
if val_acc > best_val_acc:
best_val_acc = val_acc
test_acc = tmp_test_acc
print(f'Epoch: {epoch:03d}, Train: {train_acc:.4f}, '
f'Val: {best_val_acc:.4f}, Test: {test_acc:.4f}')
|
https://github.com/pyg-team/pytorch_geometric
|
examples/ar_link_pred.py
|
import argparse
import os.path as osp
import torch
import torch.nn.functional as F
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv
from torch_geometric.utils import negative_sampling, train_test_split_edges
class GCNEncoder(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels):
super().__init__()
self.conv1 = GCNConv(in_channels, hidden_channels)
self.conv2 = GCNConv(hidden_channels, out_channels)
def forward(self, x, edge_index):
x = self.conv1(x, edge_index).relu()
return self.conv2(x, edge_index)
class LinkPredictor(torch.nn.Module):
def __init__(self, in_channels, hidden_channels):
super().__init__()
self.lin1 = torch.nn.Linear(in_channels * 2, hidden_channels)
self.lin2 = torch.nn.Linear(hidden_channels, 1)
def forward(self, z_i, z_j):
x = torch.cat([z_i, z_j], dim=1)
x = self.lin1(x).relu()
x = self.lin2(x)
return x.view(-1)
class ARLinkPredictor(torch.nn.Module):
def __init__(self, in_channels):
super().__init__()
# Split dimensions between attract and repel
self.attract_dim = in_channels // 2
self.repel_dim = in_channels - self.attract_dim
def forward(self, z_i, z_j):
# Split into attract and repel parts
z_i_attr = z_i[:, :self.attract_dim]
z_i_repel = z_i[:, self.attract_dim:]
z_j_attr = z_j[:, :self.attract_dim]
z_j_repel = z_j[:, self.attract_dim:]
# Calculate AR score
attract_score = (z_i_attr * z_j_attr).sum(dim=1)
repel_score = (z_i_repel * z_j_repel).sum(dim=1)
return attract_score - repel_score
def train(encoder, predictor, data, optimizer):
encoder.train()
predictor.train()
# Forward pass and calculate loss
optimizer.zero_grad()
z = encoder(data.x, data.train_pos_edge_index)
# Positive edges
pos_out = predictor(z[data.train_pos_edge_index[0]],
z[data.train_pos_edge_index[1]])
# Sample and predict on negative edges
neg_edge_index = negative_sampling(
edge_index=data.train_pos_edge_index,
num_nodes=data.num_nodes,
num_neg_samples=data.train_pos_edge_index.size(1),
)
neg_out = predictor(z[neg_edge_index[0]], z[neg_edge_index[1]])
# Calculate loss
pos_loss = F.binary_cross_entropy_with_logits(pos_out,
torch.ones_like(pos_out))
neg_loss = F.binary_cross_entropy_with_logits(neg_out,
torch.zeros_like(neg_out))
loss = pos_loss + neg_loss
loss.backward()
optimizer.step()
return loss.item()
@torch.no_grad()
def test(encoder, predictor, data):
encoder.eval()
predictor.eval()
z = encoder(data.x, data.train_pos_edge_index)
pos_val_out = predictor(z[data.val_pos_edge_index[0]],
z[data.val_pos_edge_index[1]])
neg_val_out = predictor(z[data.val_neg_edge_index[0]],
z[data.val_neg_edge_index[1]])
pos_test_out = predictor(z[data.test_pos_edge_index[0]],
z[data.test_pos_edge_index[1]])
neg_test_out = predictor(z[data.test_neg_edge_index[0]],
z[data.test_neg_edge_index[1]])
val_auc = compute_auc(pos_val_out, neg_val_out)
test_auc = compute_auc(pos_test_out, neg_test_out)
return val_auc, test_auc
def compute_auc(pos_out, neg_out):
    # Simple AUC calculation on the concatenated positive/negative scores:
    from sklearn.metrics import roc_auc_score
    pos_out = torch.sigmoid(pos_out).cpu()
    neg_out = torch.sigmoid(neg_out).cpu()
    y_true = torch.cat(
        [torch.ones(pos_out.size(0)),
         torch.zeros(neg_out.size(0))])
    y_score = torch.cat([pos_out, neg_out])
    return roc_auc_score(y_true.numpy(), y_score.numpy())
def main():
parser = argparse.ArgumentParser()
parser.add_argument('--dataset', type=str, default='Cora',
choices=['Cora', 'CiteSeer', 'PubMed'])
parser.add_argument('--hidden_channels', type=int, default=128)
parser.add_argument('--out_channels', type=int, default=64)
parser.add_argument('--epochs', type=int, default=200)
parser.add_argument('--use_ar', action='store_true',
help='Use Attract-Repel embeddings')
parser.add_argument('--lr', type=float, default=0.01)
args = parser.parse_args()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Load dataset
transform = T.Compose([
T.NormalizeFeatures(),
T.ToDevice(device),
])
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data',
args.dataset)
dataset = Planetoid(path, args.dataset, transform=transform)
data = dataset[0]
# Process data for link prediction
data = train_test_split_edges(data)
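    # Note: `train_test_split_edges` is deprecated in recent PyG releases in
    # favor of the `torch_geometric.transforms.RandomLinkSplit` transform.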
# Initialize encoder
encoder = GCNEncoder(
in_channels=dataset.num_features,
hidden_channels=args.hidden_channels,
out_channels=args.out_channels,
).to(device)
# Choose predictor based on args
if args.use_ar:
predictor = ARLinkPredictor(in_channels=args.out_channels).to(device)
print(f"Running link prediction on {args.dataset}"
f"with Attract-Repel embeddings")
else:
predictor = LinkPredictor(
in_channels=args.out_channels,
hidden_channels=args.hidden_channels).to(device)
print(f"Running link prediction on {args.dataset}"
f"with Traditional embeddings")
optimizer = torch.optim.Adam(
list(encoder.parameters()) + list(predictor.parameters()), lr=args.lr)
best_val_auc = 0
final_test_auc = 0
for epoch in range(1, args.epochs + 1):
loss = train(encoder, predictor, data, optimizer)
val_auc, test_auc = test(encoder, predictor, data)
if val_auc > best_val_auc:
best_val_auc = val_auc
final_test_auc = test_auc
if epoch % 10 == 0:
print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}, '
f'Val AUC: {val_auc:.4f}, '
f'Test AUC: {test_auc:.4f}')
print(f'Final results - Val AUC: {best_val_auc:.4f}, '
f'Test AUC: {final_test_auc:.4f}')
# Calculate R-fraction if using AR
if args.use_ar:
with torch.no_grad():
z = encoder(data.x, data.train_pos_edge_index)
attr_dim = args.out_channels // 2
z_attr = z[:, :attr_dim]
z_repel = z[:, attr_dim:]
attract_norm_squared = torch.sum(z_attr**2)
repel_norm_squared = torch.sum(z_repel**2)
r_fraction = repel_norm_squared / (attract_norm_squared +
repel_norm_squared)
print(f"R-fraction: {r_fraction.item():.4f}")
if __name__ == '__main__':
main()
|
https://github.com/pyg-team/pytorch_geometric
|
examples/argva_node_clustering.py
|
import os.path as osp
import matplotlib.pyplot as plt
import torch
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics.cluster import (
completeness_score,
homogeneity_score,
v_measure_score,
)
from torch.nn import Linear
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import ARGVA, GCNConv
if torch.cuda.is_available():
device = torch.device('cuda')
elif hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
device = torch.device('mps')
else:
device = torch.device('cpu')
transform = T.Compose([
T.ToDevice(device),
T.RandomLinkSplit(num_val=0.05, num_test=0.1, is_undirected=True,
split_labels=True, add_negative_train_samples=False),
])
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'Planetoid')
dataset = Planetoid(path, name='Cora', transform=transform)
train_data, val_data, test_data = dataset[0]
class Encoder(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels):
super().__init__()
self.conv1 = GCNConv(in_channels, hidden_channels)
self.conv_mu = GCNConv(hidden_channels, out_channels)
self.conv_logstd = GCNConv(hidden_channels, out_channels)
def forward(self, x, edge_index):
x = self.conv1(x, edge_index).relu()
return self.conv_mu(x, edge_index), self.conv_logstd(x, edge_index)
class Discriminator(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels):
super().__init__()
self.lin1 = Linear(in_channels, hidden_channels)
self.lin2 = Linear(hidden_channels, hidden_channels)
self.lin3 = Linear(hidden_channels, out_channels)
def forward(self, x):
x = self.lin1(x).relu()
x = self.lin2(x).relu()
return self.lin3(x)
encoder = Encoder(train_data.num_features, hidden_channels=32, out_channels=32)
discriminator = Discriminator(in_channels=32, hidden_channels=64,
out_channels=32)
model = ARGVA(encoder, discriminator).to(device)
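# ARGVA regularizes the latent space adversarially: the discriminator learns
# to distinguish encoded node embeddings from samples of a standard prior.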
encoder_optimizer = torch.optim.Adam(encoder.parameters(), lr=0.005)
discriminator_optimizer = torch.optim.Adam(discriminator.parameters(),
lr=0.001)
def train():
model.train()
encoder_optimizer.zero_grad()
z = model.encode(train_data.x, train_data.edge_index)
# We optimize the discriminator more frequently than the encoder.
for i in range(5):
discriminator_optimizer.zero_grad()
discriminator_loss = model.discriminator_loss(z)
discriminator_loss.backward()
discriminator_optimizer.step()
loss = model.recon_loss(z, train_data.pos_edge_label_index)
loss = loss + model.reg_loss(z)
loss = loss + (1 / train_data.num_nodes) * model.kl_loss()
loss.backward()
encoder_optimizer.step()
return float(loss)
@torch.no_grad()
def test(data):
model.eval()
z = model.encode(data.x, data.edge_index)
# Cluster embedded values using k-means.
kmeans_input = z.cpu().numpy()
kmeans = KMeans(n_clusters=7, random_state=0,
n_init='auto').fit(kmeans_input)
pred = kmeans.predict(kmeans_input)
labels = data.y.cpu().numpy()
completeness = completeness_score(labels, pred)
hm = homogeneity_score(labels, pred)
nmi = v_measure_score(labels, pred)
auc, ap = model.test(z, data.pos_edge_label_index,
data.neg_edge_label_index)
return auc, ap, completeness, hm, nmi
for epoch in range(1, 151):
loss = train()
auc, ap, completeness, hm, nmi = test(test_data)
print(f'Epoch: {epoch:03d}, Loss: {loss:.3f}, AUC: {auc:.3f}, '
f'AP: {ap:.3f}, Completeness: {completeness:.3f}, '
f'Homogeneity: {hm:.3f}, NMI: {nmi:.3f}')
@torch.no_grad()
def plot_points(data, colors):
model.eval()
z = model.encode(data.x, data.edge_index)
z = TSNE(n_components=2).fit_transform(z.cpu().numpy())
y = data.y.cpu().numpy()
plt.figure(figsize=(8, 8))
for i in range(dataset.num_classes):
plt.scatter(z[y == i, 0], z[y == i, 1], s=20, color=colors[i])
plt.axis('off')
plt.show()
colors = [
'#ffc0cb', '#bada55', '#008080', '#420420', '#7fe5f0', '#065535', '#ffd700'
]
plot_points(test_data, colors)
|
https://github.com/pyg-team/pytorch_geometric
|
examples/arma.py
|
import os.path as osp
import torch
import torch.nn.functional as F
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import ARMAConv
dataset = 'Cora'
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset, transform=T.NormalizeFeatures())
data = dataset[0]
class Net(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels):
super().__init__()
self.conv1 = ARMAConv(in_channels, hidden_channels, num_stacks=3,
num_layers=2, shared_weights=True, dropout=0.25)
self.conv2 = ARMAConv(hidden_channels, out_channels, num_stacks=3,
num_layers=2, shared_weights=True, dropout=0.25,
act=lambda x: x)
def forward(self, x, edge_index):
x = F.dropout(x, training=self.training)
x = F.relu(self.conv1(x, edge_index))
x = F.dropout(x, training=self.training)
x = self.conv2(x, edge_index)
return F.log_softmax(x, dim=1)
if torch.cuda.is_available():
device = torch.device('cuda')
elif hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
device = torch.device('mps')
else:
device = torch.device('cpu')
model, data = Net(dataset.num_features, 16,
dataset.num_classes).to(device), data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
def train():
model.train()
optimizer.zero_grad()
out = model(data.x, data.edge_index)
loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
loss.backward()
optimizer.step()
def test():
model.eval()
out, accs = model(data.x, data.edge_index), []
for _, mask in data('train_mask', 'val_mask', 'test_mask'):
pred = out[mask].argmax(1)
acc = pred.eq(data.y[mask]).sum().item() / mask.sum().item()
accs.append(acc)
return accs
best_val_acc = test_acc = 0
for epoch in range(1, 401):
train()
train_acc, val_acc, tmp_test_acc = test()
if val_acc > best_val_acc:
best_val_acc = val_acc
test_acc = tmp_test_acc
print(f'Epoch: {epoch:03d}, Train: {train_acc:.4f}, '
f'Val: {best_val_acc:.4f}, Test: {test_acc:.4f}')
|
https://github.com/pyg-team/pytorch_geometric
|
examples/attentive_fp.py
|
import os.path as osp
from math import sqrt
import torch
import torch.nn.functional as F
from rdkit import Chem
from torch_geometric.datasets import MoleculeNet
from torch_geometric.loader import DataLoader
from torch_geometric.nn.models import AttentiveFP
class GenFeatures:
def __init__(self):
self.symbols = [
'B', 'C', 'N', 'O', 'F', 'Si', 'P', 'S', 'Cl', 'As', 'Se', 'Br',
'Te', 'I', 'At', 'other'
]
self.hybridizations = [
Chem.rdchem.HybridizationType.SP,
Chem.rdchem.HybridizationType.SP2,
Chem.rdchem.HybridizationType.SP3,
Chem.rdchem.HybridizationType.SP3D,
Chem.rdchem.HybridizationType.SP3D2,
'other',
]
self.stereos = [
Chem.rdchem.BondStereo.STEREONONE,
Chem.rdchem.BondStereo.STEREOANY,
Chem.rdchem.BondStereo.STEREOZ,
Chem.rdchem.BondStereo.STEREOE,
]
def __call__(self, data):
# Generate AttentiveFP features according to Table 1.
mol = Chem.MolFromSmiles(data.smiles)
xs = []
for atom in mol.GetAtoms():
symbol = [0.] * len(self.symbols)
symbol[self.symbols.index(atom.GetSymbol())] = 1.
degree = [0.] * 6
degree[atom.GetDegree()] = 1.
formal_charge = atom.GetFormalCharge()
radical_electrons = atom.GetNumRadicalElectrons()
hybridization = [0.] * len(self.hybridizations)
hybridization[self.hybridizations.index(
atom.GetHybridization())] = 1.
aromaticity = 1. if atom.GetIsAromatic() else 0.
hydrogens = [0.] * 5
hydrogens[atom.GetTotalNumHs()] = 1.
chirality = 1. if atom.HasProp('_ChiralityPossible') else 0.
chirality_type = [0.] * 2
if atom.HasProp('_CIPCode'):
chirality_type[['R', 'S'].index(atom.GetProp('_CIPCode'))] = 1.
x = torch.tensor(symbol + degree + [formal_charge] +
[radical_electrons] + hybridization +
[aromaticity] + hydrogens + [chirality] +
chirality_type)
xs.append(x)
data.x = torch.stack(xs, dim=0)
edge_indices = []
edge_attrs = []
for bond in mol.GetBonds():
edge_indices += [[bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()]]
edge_indices += [[bond.GetEndAtomIdx(), bond.GetBeginAtomIdx()]]
bond_type = bond.GetBondType()
single = 1. if bond_type == Chem.rdchem.BondType.SINGLE else 0.
double = 1. if bond_type == Chem.rdchem.BondType.DOUBLE else 0.
triple = 1. if bond_type == Chem.rdchem.BondType.TRIPLE else 0.
aromatic = 1. if bond_type == Chem.rdchem.BondType.AROMATIC else 0.
conjugation = 1. if bond.GetIsConjugated() else 0.
ring = 1. if bond.IsInRing() else 0.
stereo = [0.] * 4
stereo[self.stereos.index(bond.GetStereo())] = 1.
edge_attr = torch.tensor(
[single, double, triple, aromatic, conjugation, ring] + stereo)
edge_attrs += [edge_attr, edge_attr]
if len(edge_attrs) == 0:
data.edge_index = torch.zeros((2, 0), dtype=torch.long)
data.edge_attr = torch.zeros((0, 10), dtype=torch.float)
else:
data.edge_index = torch.tensor(edge_indices).t().contiguous()
data.edge_attr = torch.stack(edge_attrs, dim=0)
return data
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'AFP_Mol')
dataset = MoleculeNet(path, name='ESOL', pre_transform=GenFeatures()).shuffle()
N = len(dataset) // 10
val_dataset = dataset[:N]
test_dataset = dataset[N:2 * N]
train_dataset = dataset[2 * N:]
train_loader = DataLoader(train_dataset, batch_size=200, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=200)
test_loader = DataLoader(test_dataset, batch_size=200)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AttentiveFP(in_channels=39, hidden_channels=200, out_channels=1,
edge_dim=10, num_layers=2, num_timesteps=2,
dropout=0.2).to(device)
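# `in_channels=39` and `edge_dim=10` match the `GenFeatures` encoding above:
# 16 symbols + 6 degrees + charge + radicals + 6 hybridizations + aromaticity
# + 5 hydrogen counts + chirality + 2 chirality types = 39 node features.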
optimizer = torch.optim.Adam(model.parameters(), lr=10**-2.5,
weight_decay=10**-5)
def train():
total_loss = total_examples = 0
for data in train_loader:
data = data.to(device)
optimizer.zero_grad()
out = model(data.x, data.edge_index, data.edge_attr, data.batch)
loss = F.mse_loss(out, data.y)
loss.backward()
optimizer.step()
total_loss += float(loss) * data.num_graphs
total_examples += data.num_graphs
return sqrt(total_loss / total_examples)
@torch.no_grad()
def test(loader):
mse = []
for data in loader:
data = data.to(device)
out = model(data.x, data.edge_index, data.edge_attr, data.batch)
mse.append(F.mse_loss(out, data.y, reduction='none').cpu())
return float(torch.cat(mse, dim=0).mean().sqrt())
for epoch in range(1, 201):
train_rmse = train()
val_rmse = test(val_loader)
test_rmse = test(test_loader)
print(f'Epoch: {epoch:03d}, Loss: {train_rmse:.4f} Val: {val_rmse:.4f} '
f'Test: {test_rmse:.4f}')
|
https://github.com/pyg-team/pytorch_geometric
|
examples/autoencoder.py
|
import argparse
import os.path as osp
import time
import torch
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GAE, VGAE, GCNConv
parser = argparse.ArgumentParser()
parser.add_argument('--variational', action='store_true')
parser.add_argument('--linear', action='store_true')
parser.add_argument('--dataset', type=str, default='Cora',
choices=['Cora', 'CiteSeer', 'PubMed'])
parser.add_argument('--epochs', type=int, default=400)
args = parser.parse_args()
if torch.cuda.is_available():
device = torch.device('cuda')
elif hasattr(torch.backends, 'mps') and torch.backends.mps.is_available():
device = torch.device('mps')
else:
device = torch.device('cpu')
transform = T.Compose([
T.NormalizeFeatures(),
T.ToDevice(device),
T.RandomLinkSplit(num_val=0.05, num_test=0.1, is_undirected=True,
split_labels=True, add_negative_train_samples=False),
])
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'Planetoid')
dataset = Planetoid(path, args.dataset, transform=transform)
train_data, val_data, test_data = dataset[0]
class GCNEncoder(torch.nn.Module):
def __init__(self, in_channels, out_channels):
super().__init__()
self.conv1 = GCNConv(in_channels, 2 * out_channels)
self.conv2 = GCNConv(2 * out_channels, out_channels)
def forward(self, x, edge_index):
x = self.conv1(x, edge_index).relu()
return self.conv2(x, edge_index)
class VariationalGCNEncoder(torch.nn.Module):
def __init__(self, in_channels, out_channels):
super().__init__()
self.conv1 = GCNConv(in_channels, 2 * out_channels)
self.conv_mu = GCNConv(2 * out_channels, out_channels)
self.conv_logstd = GCNConv(2 * out_channels, out_channels)
def forward(self, x, edge_index):
x = self.conv1(x, edge_index).relu()
return self.conv_mu(x, edge_index), self.conv_logstd(x, edge_index)
class LinearEncoder(torch.nn.Module):
def __init__(self, in_channels, out_channels):
super().__init__()
self.conv = GCNConv(in_channels, out_channels)
def forward(self, x, edge_index):
return self.conv(x, edge_index)
class VariationalLinearEncoder(torch.nn.Module):
def __init__(self, in_channels, out_channels):
super().__init__()
self.conv_mu = GCNConv(in_channels, out_channels)
self.conv_logstd = GCNConv(in_channels, out_channels)
def forward(self, x, edge_index):
return self.conv_mu(x, edge_index), self.conv_logstd(x, edge_index)
in_channels, out_channels = dataset.num_features, 16
if not args.variational and not args.linear:
model = GAE(GCNEncoder(in_channels, out_channels))
elif not args.variational and args.linear:
model = GAE(LinearEncoder(in_channels, out_channels))
elif args.variational and not args.linear:
model = VGAE(VariationalGCNEncoder(in_channels, out_channels))
elif args.variational and args.linear:
model = VGAE(VariationalLinearEncoder(in_channels, out_channels))
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
def train():
model.train()
optimizer.zero_grad()
z = model.encode(train_data.x, train_data.edge_index)
loss = model.recon_loss(z, train_data.pos_edge_label_index)
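    # For the variational models, additionally add the KL divergence to the
    # prior, scaled by the number of nodes: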
if args.variational:
loss = loss + (1 / train_data.num_nodes) * model.kl_loss()
loss.backward()
optimizer.step()
return float(loss)
@torch.no_grad()
def test(data):
model.eval()
z = model.encode(data.x, data.edge_index)
return model.test(z, data.pos_edge_label_index, data.neg_edge_label_index)
times = []
for epoch in range(1, args.epochs + 1):
start = time.time()
loss = train()
auc, ap = test(test_data)
print(f'Epoch: {epoch:03d}, AUC: {auc:.4f}, AP: {ap:.4f}')
times.append(time.time() - start)
print(f"Median time per epoch: {torch.tensor(times).median():.4f}s")
|
https://github.com/pyg-team/pytorch_geometric
|
examples/cluster_gcn_ppi.py
|
import os.path as osp
import time
import torch
import torch.nn.functional as F
from sklearn.metrics import f1_score
from torch_geometric.data import Batch
from torch_geometric.datasets import PPI
from torch_geometric.loader import ClusterData, ClusterLoader, DataLoader
from torch_geometric.nn import BatchNorm, SAGEConv
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'PPI')
train_dataset = PPI(path, split='train')
val_dataset = PPI(path, split='val')
test_dataset = PPI(path, split='test')
train_data = Batch.from_data_list(train_dataset)
cluster_data = ClusterData(train_data, num_parts=50, recursive=False,
save_dir=train_dataset.processed_dir)
train_loader = ClusterLoader(cluster_data, batch_size=1, shuffle=True,
num_workers=12)
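# `ClusterData` partitions the merged training graph into 50 parts (via METIS),
# and `ClusterLoader` serves one partition per mini-batch (batch_size=1).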
val_loader = DataLoader(val_dataset, batch_size=2, shuffle=False)
test_loader = DataLoader(test_dataset, batch_size=2, shuffle=False)
class Net(torch.nn.Module):
def __init__(self, in_channels, hidden_channels, out_channels, num_layers):
super().__init__()
self.convs = torch.nn.ModuleList()
self.batch_norms = torch.nn.ModuleList()
self.convs.append(SAGEConv(in_channels, hidden_channels))
self.batch_norms.append(BatchNorm(hidden_channels))
for _ in range(num_layers - 2):
self.convs.append(SAGEConv(hidden_channels, hidden_channels))
self.batch_norms.append(BatchNorm(hidden_channels))
self.convs.append(SAGEConv(hidden_channels, out_channels))
def forward(self, x, edge_index):
for conv, batch_norm in zip(self.convs[:-1], self.batch_norms):
x = conv(x, edge_index)
x = batch_norm(x)
x = F.relu(x)
x = F.dropout(x, p=0.2, training=self.training)
return self.convs[-1](x, edge_index)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net(in_channels=train_dataset.num_features, hidden_channels=1024,
out_channels=train_dataset.num_classes, num_layers=6).to(device)
loss_op = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
def train():
model.train()
total_loss = 0
for data in train_loader:
data = data.to(device)
optimizer.zero_grad()
loss = loss_op(model(data.x, data.edge_index), data.y)
loss.backward()
optimizer.step()
total_loss += loss.item() * data.num_nodes
return total_loss / train_data.num_nodes
@torch.no_grad()
def test(loader):
model.eval()
ys, preds = [], []
for data in loader:
ys.append(data.y)
out = model(data.x.to(device), data.edge_index.to(device))
preds.append((out > 0).float().cpu())
y, pred = torch.cat(ys, dim=0).numpy(), torch.cat(preds, dim=0).numpy()
return f1_score(y, pred, average='micro') if pred.sum() > 0 else 0
times = []
for epoch in range(1, 201):
start = time.time()
loss = train()
val_f1 = test(val_loader)
test_f1 = test(test_loader)
print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Val: {val_f1:.4f}, '
f'Test: {test_f1:.4f}')
times.append(time.time() - start)
print(f"Median time per epoch: {torch.tensor(times).median():.4f}s")
|
https://github.com/pyg-team/pytorch_geometric
|
examples/cluster_gcn_reddit.py
|
import time
import torch
import torch.nn.functional as F
from torch.nn import ModuleList
from tqdm import tqdm
from torch_geometric.datasets import Reddit
from torch_geometric.loader import ClusterData, ClusterLoader, NeighborLoader
from torch_geometric.nn import SAGEConv
dataset = Reddit('../data/Reddit')
data = dataset[0]
cluster_data = ClusterData(data, num_parts=1500, recursive=False,
save_dir=dataset.processed_dir)
train_loader = ClusterLoader(cluster_data, batch_size=20, shuffle=True,
num_workers=12)
subgraph_loader = NeighborLoader(data, num_neighbors=[-1], batch_size=1024,
shuffle=False, num_workers=12)
class Net(torch.nn.Module):
def __init__(self, in_channels, out_channels):
super().__init__()
self.convs = ModuleList(
[SAGEConv(in_channels, 128),
SAGEConv(128, out_channels)])
def forward(self, x, edge_index):
for i, conv in enumerate(self.convs):
x = conv(x, edge_index)
if i != len(self.convs) - 1:
x = F.relu(x)
x = F.dropout(x, p=0.5, training=self.training)
return F.log_softmax(x, dim=-1)
def inference(self, x_all):
pbar = tqdm(total=x_all.size(0) * len(self.convs))
pbar.set_description('Evaluating')
# Compute representations of nodes layer by layer, using *all*
# available edges. This leads to faster computation in contrast to
# immediately computing the final representations of each batch.
for i, conv in enumerate(self.convs):
xs = []
for batch in subgraph_loader:
edge_index = batch.edge_index.to(device)
x = x_all[batch.n_id].to(device)
x_target = x[:batch.batch_size]
x = conv((x, x_target), edge_index)
if i != len(self.convs) - 1:
x = F.relu(x)
xs.append(x.cpu())
pbar.update(batch.batch_size)
x_all = torch.cat(xs, dim=0)
pbar.close()
return x_all
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net(dataset.num_features, dataset.num_classes).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
def train():
model.train()
total_loss = total_nodes = 0
for batch in train_loader:
batch = batch.to(device)
optimizer.zero_grad()
out = model(batch.x, batch.edge_index)
loss = F.nll_loss(out[batch.train_mask], batch.y[batch.train_mask])
loss.backward()
optimizer.step()
nodes = batch.train_mask.sum().item()
total_loss += loss.item() * nodes
total_nodes += nodes
return total_loss / total_nodes
@torch.no_grad()
def test(): # Inference should be performed on the full graph.
model.eval()
out = model.inference(data.x)
y_pred = out.argmax(dim=-1)
accs = []
for mask in [data.train_mask, data.val_mask, data.test_mask]:
correct = y_pred[mask].eq(data.y[mask]).sum().item()
accs.append(correct / mask.sum().item())
return accs
times = []
for epoch in range(1, 31):
start = time.time()
loss = train()
if epoch % 5 == 0:
train_acc, val_acc, test_acc = test()
print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}, Train: {train_acc:.4f}, '
              f'Val: {val_acc:.4f}, Test: {test_acc:.4f}')
else:
print(f'Epoch: {epoch:02d}, Loss: {loss:.4f}')
times.append(time.time() - start)
print(f"Median time per epoch: {torch.tensor(times).median():.4f}s")
|
https://github.com/pyg-team/pytorch_geometric
|
examples/colors_topk_pool.py
|
import copy
import os.path as osp
import torch
import torch.nn.functional as F
from torch.nn import Linear as Lin
from torch.nn import ReLU
from torch.nn import Sequential as Seq
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GINConv, TopKPooling, global_add_pool
from torch_geometric.utils import scatter
class HandleNodeAttention:
def __call__(self, data):
data = copy.copy(data)
data.attn = torch.softmax(data.x[:, 0], dim=0)
data.x = data.x[:, 1:]
return data
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', 'COLORS-3')
dataset = TUDataset(path, 'COLORS-3', use_node_attr=True,
transform=HandleNodeAttention())
train_loader = DataLoader(dataset[:500], batch_size=60, shuffle=True)
val_loader = DataLoader(dataset[500:3000], batch_size=60)
test_loader = DataLoader(dataset[3000:], batch_size=60)
class Net(torch.nn.Module):
def __init__(self, in_channels):
super().__init__()
self.conv1 = GINConv(Seq(Lin(in_channels, 64), ReLU(), Lin(64, 64)))
self.pool1 = TopKPooling(in_channels, min_score=0.05)
self.conv2 = GINConv(Seq(Lin(64, 64), ReLU(), Lin(64, 64)))
self.lin = torch.nn.Linear(64, 1)
def forward(self, data):
x, edge_index, batch = data.x, data.edge_index, data.batch
out = F.relu(self.conv1(x, edge_index))
out, edge_index, _, batch, perm, score = self.pool1(
out, edge_index, None, batch, attn=x)
ratio = out.size(0) / x.size(0)
out = F.relu(self.conv2(out, edge_index))
out = global_add_pool(out, batch)
out = self.lin(out).view(-1)
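        # Supervise the learned pooling scores with the ground-truth node
        # attention (KL divergence, averaged per graph below):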
attn_loss = F.kl_div(torch.log(score + 1e-14), data.attn[perm],
reduction='none')
attn_loss = scatter(attn_loss, batch, reduce='mean')
return out, attn_loss, ratio
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net(dataset.num_features).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# Initialize to optimal attention weights:
# model.pool1.weight.data = torch.tensor([0., 1., 0., 0.]).view(1,4).to(device)
def train(epoch):
model.train()
total_loss = 0
for data in train_loader:
data = data.to(device)
optimizer.zero_grad()
out, attn_loss, _ = model(data)
loss = ((out - data.y).pow(2) + 100 * attn_loss).mean()
loss.backward()
total_loss += loss.item() * data.num_graphs
optimizer.step()
return total_loss / len(train_loader.dataset)
def test(loader):
model.eval()
corrects, total_ratio = [], 0
for data in loader:
data = data.to(device)
out, _, ratio = model(data)
pred = out.round().to(torch.long)
corrects.append(pred.eq(data.y.to(torch.long)))
total_ratio += ratio
return torch.cat(corrects, dim=0), total_ratio / len(loader)
for epoch in range(1, 301):
loss = train(epoch)
train_correct, train_ratio = test(train_loader)
val_correct, val_ratio = test(val_loader)
test_correct, test_ratio = test(test_loader)
train_acc = train_correct.sum().item() / train_correct.size(0)
val_acc = val_correct.sum().item() / val_correct.size(0)
test_acc1 = test_correct[:2500].sum().item() / 2500
test_acc2 = test_correct[2500:5000].sum().item() / 2500
test_acc3 = test_correct[5000:].sum().item() / 2500
print(f'Epoch: {epoch:03d}, Loss: {loss:.4f}, Train: {train_acc:.3f}, '
f'Val: {val_acc:.3f}, Test Orig: {test_acc1:.3f}, '
f'Test Large: {test_acc2:.3f}, Test LargeC: {test_acc3:.3f}, '
f'Train/Val/Test Ratio='
f'{train_ratio:.3f}/{val_ratio:.3f}/{test_ratio:.3f}')
|
https://github.com/pyg-team/pytorch_geometric
|
examples/cora.py
|
import os.path as osp
import torch
import torch.nn.functional as F
import torch_geometric.transforms as T
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import SplineConv
from torch_geometric.typing import WITH_TORCH_SPLINE_CONV
if not WITH_TORCH_SPLINE_CONV:
quit("This example requires 'torch-spline-conv'")
dataset = 'Cora'
transform = T.Compose([
T.RandomNodeSplit(num_val=500, num_test=500),
T.TargetIndegree(),
])
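# `TargetIndegree` stores the normalized in-degree of each edge's target node
# as a one-dimensional edge attribute, which SplineConv uses as its
# pseudo-coordinate.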
path = osp.join(osp.dirname(osp.realpath(__file__)), '..', 'data', dataset)
dataset = Planetoid(path, dataset, transform=transform)
data = dataset[0]
class Net(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv1 = SplineConv(dataset.num_features, 16, dim=1, kernel_size=2)
self.conv2 = SplineConv(16, dataset.num_classes, dim=1, kernel_size=2)
def forward(self):
x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr
x = F.dropout(x, training=self.training)
x = F.elu(self.conv1(x, edge_index, edge_attr))
x = F.dropout(x, training=self.training)
x = self.conv2(x, edge_index, edge_attr)
return F.log_softmax(x, dim=1)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model, data = Net().to(device), data.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-3)
def train():
model.train()
optimizer.zero_grad()
F.nll_loss(model()[data.train_mask], data.y[data.train_mask]).backward()
optimizer.step()
@torch.no_grad()
def test():
model.eval()
log_probs, accs = model(), []
for _, mask in data('train_mask', 'test_mask'):
pred = log_probs[mask].max(1)[1]
acc = pred.eq(data.y[mask]).sum().item() / mask.sum().item()
accs.append(acc)
return accs
for epoch in range(1, 201):
train()
train_acc, test_acc = test()
print(f'Epoch: {epoch:03d}, Train: {train_acc:.4f}, Test: {test_acc:.4f}')
|
https://github.com/pandas-dev/pandas
|
README.md
|
<picture align="center">
<source media="(prefers-color-scheme: dark)" srcset="https://pandas.pydata.org/static/img/pandas_white.svg">
<img alt="Pandas Logo" src="https://pandas.pydata.org/static/img/pandas.svg">
</picture>
-----------------
# pandas: powerful Python data analysis toolkit
| | |
| --- | --- |
| Testing | [](https://github.com/pandas-dev/pandas/actions/workflows/unit-tests.yml) [](https://codecov.io/gh/pandas-dev/pandas) |
| Package | [](https://pypi.org/project/pandas/) [](https://pypi.org/project/pandas/) [](https://anaconda.org/conda-forge/pandas) [](https://anaconda.org/conda-forge/pandas) |
| Meta | [](https://numfocus.org) [](https://doi.org/10.5281/zenodo.3509134) [](https://github.com/pandas-dev/pandas/blob/main/LICENSE) [](https://pandas.pydata.org/docs/dev/development/community.html?highlight=slack#community-slack) |
## What is it?
**pandas** is a Python package that provides fast, flexible, and expressive data
structures designed to make working with "relational" or "labeled" data both
easy and intuitive. It aims to be the fundamental high-level building block for
doing practical, **real world** data analysis in Python. Additionally, it has
the broader goal of becoming **the most powerful and flexible open source data
analysis / manipulation tool available in any language**. It is already well on
its way towards this goal.
## Table of Contents
- [Main Features](#main-features)
- [Where to get it](#where-to-get-it)
- [Dependencies](#dependencies)
- [Installation from sources](#installation-from-sources)
- [License](#license)
- [Documentation](#documentation)
- [Background](#background)
- [Getting Help](#getting-help)
- [Discussion and Development](#discussion-and-development)
- [Contributing to pandas](#contributing-to-pandas)
## Main Features
Here are just a few of the things that pandas does well (a short illustrative sketch follows below):
- Easy handling of [**missing data**][missing-data] (represented as
`NaN`, `NA`, or `NaT`) in floating point as well as non-floating point data
- Size mutability: columns can be [**inserted and
deleted**][insertion-deletion] from DataFrame and higher dimensional
objects
- Automatic and explicit [**data alignment**][alignment]: objects can
be explicitly aligned to a set of labels, or the user can simply
ignore the labels and let `Series`, `DataFrame`, etc. automatically
align the data for you in computations
- Powerful, flexible [**group by**][groupby] functionality to perform
split-apply-combine operations on data sets, for both aggregating
and transforming data
- Make it [**easy to convert**][conversion] ragged,
differently-indexed data in other Python and NumPy data structures
into DataFrame objects
- Intelligent label-based [**slicing**][slicing], [**fancy
indexing**][fancy-indexing], and [**subsetting**][subsetting] of
large data sets
- Intuitive [**merging**][merging] and [**joining**][joining] data
sets
- Flexible [**reshaping**][reshape] and [**pivoting**][pivot-table] of
data sets
- [**Hierarchical**][mi] labeling of axes (possible to have multiple
labels per tick)
- Robust IO tools for loading data from [**flat files**][flat-files]
(CSV and delimited), [**Excel files**][excel], [**databases**][db],
and saving/loading data from the ultrafast [**HDF5 format**][hdfstore]
- [**Time series**][timeseries]-specific functionality: date range
generation and frequency conversion, moving window statistics,
date shifting and lagging
[missing-data]: https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html
[insertion-deletion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#column-selection-addition-deletion
[alignment]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html?highlight=alignment#intro-to-data-structures
[groupby]: https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#group-by-split-apply-combine
[conversion]: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html#dataframe
[slicing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#slicing-ranges
[fancy-indexing]: https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#advanced
[subsetting]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing
[merging]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#database-style-dataframe-or-named-series-joining-merging
[joining]: https://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html#joining-on-index
[reshape]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html
[pivot-table]: https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html
[mi]: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#hierarchical-indexing-multiindex
[flat-files]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#csv-text-files
[excel]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#excel-files
[db]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#sql-queries
[hdfstore]: https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#hdf5-pytables
[timeseries]: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#time-series-date-functionality
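A tiny illustrative sketch (hypothetical data, not from the pandas docs) combining missing-data handling with group by:
```python
import numpy as np
import pandas as pd
df = pd.DataFrame({
    "group": ["a", "a", "b"],
    "value": [1.0, np.nan, 3.0],  # missing data represented as NaN
})
print(df["value"].fillna(0.0))               # easy handling of missing data
print(df.groupby("group")["value"].mean())   # split-apply-combine
```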
## Where to get it
The source code is currently hosted on GitHub at:
https://github.com/pandas-dev/pandas
Binary installers for the latest released version are available at the [Python
Package Index (PyPI)](https://pypi.org/project/pandas) and on [Conda](https://anaconda.org/conda-forge/pandas).
```sh
# conda
conda install -c conda-forge pandas
```
```sh
# or PyPI
pip install pandas
```
The list of changes to pandas between each release can be found
[here](https://pandas.pydata.org/pandas-docs/stable/whatsnew/index.html). For full
details, see the commit logs at https://github.com/pandas-dev/pandas.
## Dependencies
- [NumPy - Adds support for large, multi-dimensional arrays, matrices and high-level mathematical functions to operate on these arrays](https://www.numpy.org)
- [python-dateutil - Provides powerful extensions to the standard datetime module](https://dateutil.readthedocs.io/en/stable/index.html)
- [pytz - Brings the Olson tz database into Python which allows accurate and cross platform timezone calculations](https://github.com/stub42/pytz)
See the [full installation instructions](https://pandas.pydata.org/pandas-docs/stable/install.html#dependencies) for minimum supported versions of required, recommended and optional dependencies.
## Installation from sources
To install pandas from source you need [Cython](https://cython.org/) in addition to the normal
dependencies above. Cython can be installed from PyPI:
```sh
pip install cython
```
In the `pandas` directory (same one where you found this file after
cloning the git repo), execute:
```sh
pip install .
```
or for installing in [development mode](https://pip.pypa.io/en/latest/cli/pip_install/#install-editable):
```sh
python -m pip install -ve . --no-build-isolation -Ceditable-verbose=true
```
See the full instructions for [installing from source](https://pandas.pydata.org/docs/dev/development/contributing_environment.html).
## License
[BSD 3](LICENSE)
## Documentation
The official documentation is hosted on [PyData.org](https://pandas.pydata.org/pandas-docs/stable/).
## Background
Work on ``pandas`` started at [AQR](https://www.aqr.com/) (a quantitative hedge fund) in 2008 and
has been under active development since then.
## Getting Help
For usage questions, the best place to go to is [StackOverflow](https://stackoverflow.com/questions/tagged/pandas).
Further, general questions and discussions can also take place on the [pydata mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata).
## Discussion and Development
Most development discussions take place on GitHub in this repo, via the [GitHub issue tracker](https://github.com/pandas-dev/pandas/issues).
Further, the [pandas-dev mailing list](https://mail.python.org/mailman/listinfo/pandas-dev) can also be used for specialized discussions or design issues, and a [Slack channel](https://pandas.pydata.org/docs/dev/development/community.html?highlight=slack#community-slack) is available for quick development related questions.
There are also frequent [community meetings](https://pandas.pydata.org/docs/dev/development/community.html#community-meeting) for project maintainers open to the community as well as monthly [new contributor meetings](https://pandas.pydata.org/docs/dev/development/community.html#new-contributor-meeting) to help support new contributors.
Additional information on the communication channels can be found on the [contributor community](https://pandas.pydata.org/docs/development/community.html) page.
## Contributing to pandas
[](https://www.codetriage.com/pandas-dev/pandas)
All contributions, bug reports, bug fixes, documentation improvements, enhancements, and ideas are welcome.
A detailed overview on how to contribute can be found in the **[contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html)**.
If you are simply looking to start working with the pandas codebase, navigate to the [GitHub "issues" tab](https://github.com/pandas-dev/pandas/issues) and start looking through interesting issues. There are a number of issues listed under [Docs](https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open) and [good first issue](https://github.com/pandas-dev/pandas/issues?labels=good+first+issue&sort=updated&state=open) where you could start out.
You can also triage issues which may include reproducing bug reports, or asking for vital information such as version numbers or reproduction instructions. If you would like to start triaging issues, one easy way to get started is to [subscribe to pandas on CodeTriage](https://www.codetriage.com/pandas-dev/pandas).
Or maybe through using pandas you have an idea of your own or are looking for something in the documentation and thinking ‘this can be improved’...you can do something about it!
Feel free to ask questions on the [mailing list](https://groups.google.com/forum/?fromgroups#!forum/pydata) or on [Slack](https://pandas.pydata.org/docs/dev/development/community.html?highlight=slack#community-slack).
As contributors and maintainers to this project, you are expected to abide by pandas' code of conduct. More information can be found at: [Contributor Code of Conduct](https://github.com/pandas-dev/.github/blob/master/CODE_OF_CONDUCT.md)
<hr>
[Go to Top](#table-of-contents)
|
https://github.com/pandas-dev/pandas
|
generate_pxi.py
|
import argparse
import os
from Cython import Tempita
def process_tempita(pxifile, outfile) -> None:
with open(pxifile, encoding="utf-8") as f:
tmpl = f.read()
pyxcontent = Tempita.sub(tmpl)
with open(outfile, "w", encoding="utf-8") as f:
f.write(pyxcontent)
def main() -> None:
parser = argparse.ArgumentParser()
parser.add_argument("infile", type=str, help="Path to the input file")
parser.add_argument("-o", "--outdir", type=str, help="Path to the output directory")
args = parser.parse_args()
if not args.infile.endswith(".in"):
raise ValueError(f"Unexpected extension: {args.infile}")
outdir_abs = os.path.join(os.getcwd(), args.outdir)
outfile = os.path.join(
outdir_abs, os.path.splitext(os.path.split(args.infile)[1])[0]
)
process_tempita(args.infile, outfile)
if __name__ == "__main__":
    main()
|
https://github.com/pandas-dev/pandas
|
generate_version.py
|
#!/usr/bin/env python3
# Note: This file has to live next to setup.py or versioneer will not work
import argparse
import os
import sys
import versioneer
sys.path.insert(0, "")
def write_version_info(path) -> None:
version = None
git_version = None
try:
import _version_meson
version = _version_meson.__version__
git_version = _version_meson.__git_version__
except ImportError:
version = versioneer.get_version()
git_version = versioneer.get_versions()["full-revisionid"]
if os.environ.get("MESON_DIST_ROOT"):
path = os.path.join(os.environ.get("MESON_DIST_ROOT"), path)
with open(path, "w", encoding="utf-8") as file:
file.write(f'__version__="{version}"\n')
file.write(f'__git_version__="{git_version}"\n')
def main() -> None:
parser = argparse.ArgumentParser()
parser.add_argument(
"-o",
"--outfile",
type=str,
help="Path to write version info to",
required=False,
)
parser.add_argument(
"--print",
default=False,
action="store_true",
help="Whether to print out the version",
required=False,
)
args = parser.parse_args()
if args.outfile:
if not args.outfile.endswith(".py"):
raise ValueError(
f"Output file must be a Python file. "
f"Got: {args.outfile} as filename instead"
)
write_version_info(args.outfile)
if args.print:
try:
import _version_meson
version = _version_meson.__version__
except ImportError:
version = versioneer.get_version()
print(version)
if __name__ == "__main__":
    main()
|
https://github.com/pandas-dev/pandas
|
setup.py
|
#!/usr/bin/env python3
"""
Parts of this file were taken from the pyzmq project
(https://github.com/zeromq/pyzmq) which have been permitted for use under the
BSD license. Parts are from lxml (https://github.com/lxml/lxml)
"""
import argparse
import multiprocessing
import os
from os.path import join as pjoin
import platform
import shutil
import sys
from sysconfig import get_config_vars
import numpy
from pkg_resources import parse_version
from setuptools import (
Command,
Extension,
setup,
)
from setuptools.command.build_ext import build_ext as _build_ext
import versioneer
cmdclass = versioneer.get_cmdclass()
def is_platform_windows():
return sys.platform in ("win32", "cygwin")
def is_platform_mac():
return sys.platform == "darwin"
# note: sync with pyproject.toml, environment.yml and asv.conf.json
min_cython_ver = "3.0"
try:
from Cython import (
Tempita,
__version__ as _CYTHON_VERSION,
)
from Cython.Build import cythonize
_CYTHON_INSTALLED = parse_version(_CYTHON_VERSION) >= parse_version(min_cython_ver)
except ImportError:
_CYTHON_VERSION = None
_CYTHON_INSTALLED = False
cythonize = lambda x, *args, **kwargs: x # dummy func
_pxi_dep_template = {
"algos": ["_libs/algos_common_helper.pxi.in", "_libs/algos_take_helper.pxi.in"],
"hashtable": [
"_libs/hashtable_class_helper.pxi.in",
"_libs/hashtable_func_helper.pxi.in",
"_libs/khash_for_primitive_helper.pxi.in",
],
"index": ["_libs/index_class_helper.pxi.in"],
"sparse": ["_libs/sparse_op_helper.pxi.in"],
"interval": ["_libs/intervaltree.pxi.in"],
}
_pxifiles = []
_pxi_dep = {}
for module, files in _pxi_dep_template.items():
pxi_files = [pjoin("pandas", x) for x in files]
_pxifiles.extend(pxi_files)
_pxi_dep[module] = pxi_files
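# Illustration (added comment): after this loop, _pxi_dep["index"] is
# ["pandas/_libs/index_class_helper.pxi.in"] (joined via pjoin, so the path
# separator is platform-specific), and _pxifiles is the union of all lists.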
class build_ext(_build_ext):
@classmethod
def render_templates(cls, pxifiles) -> None:
for pxifile in pxifiles:
# build pxifiles first, template extension must be .pxi.in
assert pxifile.endswith(".pxi.in")
outfile = pxifile[:-3]
if (
os.path.exists(outfile)
and os.stat(pxifile).st_mtime < os.stat(outfile).st_mtime
):
# if .pxi.in is not updated, no need to output .pxi
continue
with open(pxifile, encoding="utf-8") as f:
tmpl = f.read()
pyxcontent = Tempita.sub(tmpl)
with open(outfile, "w", encoding="utf-8") as f:
f.write(pyxcontent)
def build_extensions(self) -> None:
# if building from c files, don't need to
# generate template output
if _CYTHON_INSTALLED:
self.render_templates(_pxifiles)
super().build_extensions()
class CleanCommand(Command):
"""Custom command to clean the .so and .pyc files."""
user_options = [("all", "a", "")]
def initialize_options(self) -> None:
self.all = True
self._clean_me = []
self._clean_trees = []
base = pjoin("pandas", "_libs", "src")
parser = pjoin(base, "parser")
vendored = pjoin(base, "vendored")
dt = pjoin(base, "datetime")
ujson_python = pjoin(vendored, "ujson", "python")
ujson_lib = pjoin(vendored, "ujson", "lib")
self._clean_exclude = [
pjoin(vendored, "numpy", "datetime", "np_datetime.c"),
pjoin(vendored, "numpy", "datetime", "np_datetime_strings.c"),
pjoin(dt, "date_conversions.c"),
pjoin(parser, "tokenizer.c"),
pjoin(parser, "io.c"),
pjoin(ujson_python, "ujson.c"),
pjoin(ujson_python, "objToJSON.c"),
pjoin(ujson_python, "JSONtoObj.c"),
pjoin(ujson_lib, "ultrajsonenc.c"),
pjoin(ujson_lib, "ultrajsondec.c"),
pjoin(dt, "pd_datetime.c"),
pjoin(parser, "pd_parser.c"),
]
for root, dirs, files in os.walk("pandas"):
for f in files:
filepath = pjoin(root, f)
if filepath in self._clean_exclude:
continue
if os.path.splitext(f)[-1] in (
".pyc",
".so",
".o",
".pyo",
".pyd",
".c",
".cpp",
".orig",
):
self._clean_me.append(filepath)
self._clean_trees.extend(pjoin(root, d) for d in dirs if d == "__pycache__")
# clean the generated pxi files
for pxifile in _pxifiles:
pxifile_replaced = pxifile.replace(".pxi.in", ".pxi")
self._clean_me.append(pxifile_replaced)
self._clean_trees.extend(d for d in ("build", "dist") if os.path.exists(d))
def finalize_options(self) -> None:
pass
def run(self) -> None:
for clean_me in self._clean_me:
try:
os.unlink(clean_me)
except OSError:
pass
for clean_tree in self._clean_trees:
try:
shutil.rmtree(clean_tree)
except OSError:
pass
# we need to inherit from the versioneer
# class as it encodes the version info
sdist_class = cmdclass["sdist"]
class CheckSDist(sdist_class):
"""Custom sdist that ensures Cython has compiled all pyx files to c."""
_pyxfiles = [
"pandas/_libs/arrays.pyx",
"pandas/_libs/lib.pyx",
"pandas/_libs/hashtable.pyx",
"pandas/_libs/tslib.pyx",
"pandas/_libs/index.pyx",
"pandas/_libs/internals.pyx",
"pandas/_libs/algos.pyx",
"pandas/_libs/join.pyx",
"pandas/_libs/indexing.pyx",
"pandas/_libs/interval.pyx",
"pandas/_libs/hashing.pyx",
"pandas/_libs/missing.pyx",
"pandas/_libs/testing.pyx",
"pandas/_libs/sparse.pyx",
"pandas/_libs/ops.pyx",
"pandas/_libs/parsers.pyx",
"pandas/_libs/tslibs/base.pyx",
"pandas/_libs/tslibs/ccalendar.pyx",
"pandas/_libs/tslibs/dtypes.pyx",
"pandas/_libs/tslibs/period.pyx",
"pandas/_libs/tslibs/strptime.pyx",
"pandas/_libs/tslibs/np_datetime.pyx",
"pandas/_libs/tslibs/timedeltas.pyx",
"pandas/_libs/tslibs/timestamps.pyx",
"pandas/_libs/tslibs/timezones.pyx",
"pandas/_libs/tslibs/conversion.pyx",
"pandas/_libs/tslibs/fields.pyx",
"pandas/_libs/tslibs/offsets.pyx",
"pandas/_libs/tslibs/parsing.pyx",
"pandas/_libs/tslibs/tzconversion.pyx",
"pandas/_libs/tslibs/vectorized.pyx",
"pandas/_libs/window/indexers.pyx",
"pandas/_libs/writers.pyx",
"pandas/_libs/sas.pyx",
"pandas/_libs/byteswap.pyx",
]
_cpp_pyxfiles = [
"pandas/_libs/window/aggregations.pyx",
]
def initialize_options(self) -> None:
sdist_class.initialize_options(self)
def run(self) -> None:
if "cython" in cmdclass:
self.run_command("cython")
else:
# If we are not running cython then
# compile the extensions correctly
pyx_files = [(self._pyxfiles, "c"), (self._cpp_pyxfiles, "cpp")]
for pyxfiles, extension in pyx_files:
for pyxfile in pyxfiles:
sourcefile = pyxfile[:-3] + extension
msg = (
f"{extension}-source file '{sourcefile}' not found.\n"
"Run 'setup.py cython' before sdist."
)
assert os.path.isfile(sourcefile), msg
sdist_class.run(self)
class CheckingBuildExt(build_ext):
"""
Subclass build_ext to get clearer report if Cython is necessary.
"""
def check_cython_extensions(self, extensions) -> None:
for ext in extensions:
for src in ext.sources:
if not os.path.exists(src):
print(f"{ext.name}: -> [{ext.sources}]")
raise Exception(
f"""Cython-generated file '{src}' not found.
Cython is required to compile pandas from a development branch.
Please install Cython or download a release package of pandas.
"""
)
def build_extensions(self) -> None:
self.check_cython_extensions(self.extensions)
build_ext.build_extensions(self)
class CythonCommand(build_ext):
"""
Custom command subclassed from Cython.Distutils.build_ext
to compile pyx->c, and stop there. All this does is override the
C-compile method build_extension() with a no-op.
"""
def build_extension(self, ext) -> None:
pass
class DummyBuildSrc(Command):
"""numpy's build_src command interferes with Cython's build_ext."""
user_options = []
def initialize_options(self) -> None:
self.py_modules_dict = {}
def finalize_options(self) -> None:
pass
def run(self) -> None:
pass
cmdclass["clean"] = CleanCommand
cmdclass["build_ext"] = CheckingBuildExt
if _CYTHON_INSTALLED:
suffix = ".pyx"
cmdclass["cython"] = CythonCommand
else:
suffix = ".c"
cmdclass["build_src"] = DummyBuildSrc
# ----------------------------------------------------------------------
# Preparation of compiler arguments
debugging_symbols_requested = "--with-debugging-symbols" in sys.argv
if debugging_symbols_requested:
sys.argv.remove("--with-debugging-symbols")
if sys.byteorder == "big":
endian_macro = [("__BIG_ENDIAN__", "1")]
else:
endian_macro = [("__LITTLE_ENDIAN__", "1")]
extra_compile_args = []
extra_link_args = []
if is_platform_windows():
if debugging_symbols_requested:
extra_compile_args.append("/Z7")
extra_link_args.append("/DEBUG")
else:
# PANDAS_CI=1 is set in CI
if os.environ.get("PANDAS_CI", "0") == "1":
extra_compile_args.append("-Werror")
if debugging_symbols_requested:
extra_compile_args.append("-g3")
extra_compile_args.append("-UNDEBUG")
extra_compile_args.append("-O0")
# Build for at least macOS 10.9 when compiling on a 10.9 system or above,
# overriding CPython distutils behaviour which is to target the version that
# python was built for. This may be overridden by setting
# MACOSX_DEPLOYMENT_TARGET before calling setup.py
if is_platform_mac():
if "MACOSX_DEPLOYMENT_TARGET" not in os.environ:
current_system = platform.mac_ver()[0]
python_target = get_config_vars().get(
"MACOSX_DEPLOYMENT_TARGET", current_system
)
target_macos_version = "10.9"
parsed_macos_version = parse_version(target_macos_version)
if (
parse_version(str(python_target))
< parsed_macos_version
<= parse_version(current_system)
):
os.environ["MACOSX_DEPLOYMENT_TARGET"] = target_macos_version
if sys.version_info[:2] == (3, 8): # GH 33239
extra_compile_args.append("-Wno-error=deprecated-declarations")
# https://github.com/pandas-dev/pandas/issues/35559
extra_compile_args.append("-Wno-error=unreachable-code")
# enable coverage by building cython files by setting the environment variable
# "PANDAS_CYTHON_COVERAGE" (with a Truthy value) or by running build_ext
# with `--with-cython-coverage` enabled
linetrace = os.environ.get("PANDAS_CYTHON_COVERAGE", False) # noqa: PLW1508
if "--with-cython-coverage" in sys.argv:
linetrace = True
sys.argv.remove("--with-cython-coverage")
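# Illustrative invocations (added comment, assuming an in-place dev build):
#   PANDAS_CYTHON_COVERAGE=1 python setup.py build_ext --inplace
# or equivalently:
#   python setup.py build_ext --inplace --with-cython-coverage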
# Note: if not using `cythonize`, coverage can be enabled by
# pinning `ext.cython_directives = directives` to each ext in extensions.
# github.com/cython/cython/wiki/enhancements-compilerdirectives#in-setuppy
directives = {"linetrace": False, "language_level": 3, "always_allow_keywords": True}
macros = []
if linetrace:
# https://pypkg.com/pypi/pytest-cython/f/tests/example-project/setup.py
directives["linetrace"] = True
macros = [("CYTHON_TRACE", "1"), ("CYTHON_TRACE_NOGIL", "1")]
# silence build warnings about deprecated API usage
# we can't do anything about these warnings because they stem from
# cython+numpy version mismatches.
macros.append(("NPY_NO_DEPRECATED_API", "0"))
# ----------------------------------------------------------------------
# Specification of Dependencies
# TODO(cython#4518): Need to check to see if e.g. `linetrace` has changed and
# possibly re-compile.
def maybe_cythonize(extensions, *args, **kwargs):
"""
Render tempita templates before calling cythonize. This is skipped for
* clean
* sdist
"""
if "clean" in sys.argv or "sdist" in sys.argv:
# See https://github.com/cython/cython/issues/1495
return extensions
elif not _CYTHON_INSTALLED:
# GH#28836 raise a helpful error message
if _CYTHON_VERSION:
raise RuntimeError(
f"Cannot cythonize with old Cython version ({_CYTHON_VERSION} "
f"installed, needs {min_cython_ver})"
)
raise RuntimeError("Cannot cythonize without Cython installed.")
# reuse any parallel arguments provided for compilation to cythonize
parser = argparse.ArgumentParser()
parser.add_argument("--parallel", "-j", type=int, default=1)
parsed, _ = parser.parse_known_args()
kwargs["nthreads"] = parsed.parallel
build_ext.render_templates(_pxifiles)
if debugging_symbols_requested:
kwargs["gdb_debug"] = True
return cythonize(extensions, *args, **kwargs)
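# Note (added comment): any -j/--parallel flag passed for compilation is reused
# above for cythonization, e.g. the illustrative invocation
#   python setup.py build_ext --inplace -j 4
# also runs cythonize with nthreads=4.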
def srcpath(name=None, suffix=".pyx", subdir="src"):
return pjoin("pandas", subdir, name + suffix)
lib_depends = ["pandas/_libs/include/pandas/parse_helper.h"]
tseries_depends = [
"pandas/_libs/include/pandas/datetime/pd_datetime.h",
]
ext_data = {
"_libs.algos": {
"pyxfile": "_libs/algos",
"depends": _pxi_dep["algos"],
},
"_libs.arrays": {"pyxfile": "_libs/arrays"},
"_libs.groupby": {"pyxfile": "_libs/groupby"},
"_libs.hashing": {"pyxfile": "_libs/hashing", "depends": []},
"_libs.hashtable": {
"pyxfile": "_libs/hashtable",
"depends": (
[
"pandas/_libs/include/pandas/vendored/klib/khash_python.h",
"pandas/_libs/include/pandas/vendored/klib/khash.h",
]
+ _pxi_dep["hashtable"]
),
},
"_libs.index": {
"pyxfile": "_libs/index",
"depends": _pxi_dep["index"],
},
"_libs.indexing": {"pyxfile": "_libs/indexing"},
"_libs.internals": {"pyxfile": "_libs/internals"},
"_libs.interval": {
"pyxfile": "_libs/interval",
"depends": _pxi_dep["interval"],
},
"_libs.join": {"pyxfile": "_libs/join"},
"_libs.lib": {
"pyxfile": "_libs/lib",
"depends": lib_depends + tseries_depends,
},
"_libs.missing": {"pyxfile": "_libs/missing", "depends": tseries_depends},
"_libs.parsers": {
"pyxfile": "_libs/parsers",
"depends": [
"pandas/_libs/src/parser/tokenizer.h",
"pandas/_libs/src/parser/io.h",
"pandas/_libs/src/pd_parser.h",
],
},
"_libs.ops": {"pyxfile": "_libs/ops"},
"_libs.ops_dispatch": {"pyxfile": "_libs/ops_dispatch"},
"_libs.properties": {"pyxfile": "_libs/properties"},
"_libs.reshape": {"pyxfile": "_libs/reshape", "depends": []},
"_libs.sparse": {"pyxfile": "_libs/sparse", "depends": _pxi_dep["sparse"]},
"_libs.tslib": {
"pyxfile": "_libs/tslib",
"depends": tseries_depends,
},
"_libs.tslibs.base": {"pyxfile": "_libs/tslibs/base"},
"_libs.tslibs.ccalendar": {"pyxfile": "_libs/tslibs/ccalendar"},
"_libs.tslibs.dtypes": {"pyxfile": "_libs/tslibs/dtypes"},
"_libs.tslibs.conversion": {
"pyxfile": "_libs/tslibs/conversion",
"depends": tseries_depends,
},
"_libs.tslibs.fields": {
"pyxfile": "_libs/tslibs/fields",
"depends": tseries_depends,
},
"_libs.tslibs.nattype": {"pyxfile": "_libs/tslibs/nattype"},
"_libs.tslibs.np_datetime": {
"pyxfile": "_libs/tslibs/np_datetime",
"depends": tseries_depends,
},
"_libs.tslibs.offsets": {
"pyxfile": "_libs/tslibs/offsets",
"depends": tseries_depends,
},
"_libs.tslibs.parsing": {
"pyxfile": "_libs/tslibs/parsing",
"sources": ["pandas/_libs/src/parser/tokenizer.c"],
},
"_libs.tslibs.period": {
"pyxfile": "_libs/tslibs/period",
"depends": tseries_depends,
},
"_libs.tslibs.strptime": {
"pyxfile": "_libs/tslibs/strptime",
"depends": tseries_depends,
},
"_libs.tslibs.timedeltas": {
"pyxfile": "_libs/tslibs/timedeltas",
"depends": tseries_depends,
},
"_libs.tslibs.timestamps": {
"pyxfile": "_libs/tslibs/timestamps",
"depends": tseries_depends,
},
"_libs.tslibs.timezones": {"pyxfile": "_libs/tslibs/timezones"},
"_libs.tslibs.tzconversion": {
"pyxfile": "_libs/tslibs/tzconversion",
"depends": tseries_depends,
},
"_libs.tslibs.vectorized": {
"pyxfile": "_libs/tslibs/vectorized",
"depends": tseries_depends,
},
"_libs.testing": {"pyxfile": "_libs/testing"},
"_libs.window.aggregations": {
"pyxfile": "_libs/window/aggregations",
"language": "c++",
"suffix": ".cpp",
"depends": ["pandas/_libs/include/pandas/skiplist.h"],
},
"_libs.window.indexers": {"pyxfile": "_libs/window/indexers"},
"_libs.writers": {"pyxfile": "_libs/writers"},
"_libs.sas": {"pyxfile": "_libs/sas"},
"_libs.byteswap": {"pyxfile": "_libs/byteswap"},
}
extensions = []
for name, data in ext_data.items():
source_suffix = suffix if suffix == ".pyx" else data.get("suffix", ".c")
sources = [srcpath(data["pyxfile"], suffix=source_suffix, subdir="")]
sources.extend(data.get("sources", []))
include = ["pandas/_libs/include", numpy.get_include()]
undef_macros = []
if (
sys.platform == "zos"
and data.get("language") == "c++"
and os.path.basename(os.environ.get("CXX", "/bin/xlc++")) in ("xlc", "xlc++")
):
data.get("macros", macros).append(("__s390__", "1"))
extra_compile_args.append("-qlanglvl=extended0x:nolibext")
undef_macros.append("_POSIX_THREADS")
obj = Extension(
f"pandas.{name}",
sources=sources,
depends=data.get("depends", []),
include_dirs=include,
language=data.get("language", "c"),
define_macros=data.get("macros", macros),
extra_compile_args=extra_compile_args,
extra_link_args=extra_link_args,
undef_macros=undef_macros,
)
extensions.append(obj)
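# Illustration (added comment): when building from Cython sources
# (suffix == ".pyx"), the "_libs.algos" entry above becomes roughly
#   Extension("pandas._libs.algos", sources=["pandas/_libs/algos.pyx"],
#             depends=_pxi_dep["algos"], include_dirs=include, ...)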
# ----------------------------------------------------------------------
# ujson
if suffix == ".pyx":
# undo dumb setuptools bug clobbering .pyx sources back to .c
for ext in extensions:
if ext.sources[0].endswith((".c", ".cpp")):
root, _ = os.path.splitext(ext.sources[0])
ext.sources[0] = root + suffix
ujson_ext = Extension(
"pandas._libs.json",
depends=[
"pandas/_libs/include/pandas/vendored/ujson/lib/ultrajson.h",
"pandas/_libs/include/pandas/datetime/pd_datetime.h",
],
sources=(
[
"pandas/_libs/src/vendored/ujson/python/ujson.c",
"pandas/_libs/src/vendored/ujson/python/objToJSON.c",
"pandas/_libs/src/vendored/ujson/python/JSONtoObj.c",
"pandas/_libs/src/vendored/ujson/lib/ultrajsonenc.c",
"pandas/_libs/src/vendored/ujson/lib/ultrajsondec.c",
]
),
include_dirs=[
"pandas/_libs/include",
numpy.get_include(),
],
extra_compile_args=(extra_compile_args),
extra_link_args=extra_link_args,
define_macros=macros,
)
extensions.append(ujson_ext)
# ----------------------------------------------------------------------
# ----------------------------------------------------------------------
# pd_datetime
pd_dt_ext = Extension(
"pandas._libs.pandas_datetime",
depends=["pandas/_libs/tslibs/datetime/pd_datetime.h"],
sources=(
[
"pandas/_libs/src/vendored/numpy/datetime/np_datetime.c",
"pandas/_libs/src/vendored/numpy/datetime/np_datetime_strings.c",
"pandas/_libs/src/datetime/date_conversions.c",
"pandas/_libs/src/datetime/pd_datetime.c",
]
),
include_dirs=[
"pandas/_libs/include",
numpy.get_include(),
],
extra_compile_args=(extra_compile_args),
extra_link_args=extra_link_args,
define_macros=macros,
)
extensions.append(pd_dt_ext)
# ----------------------------------------------------------------------
# ----------------------------------------------------------------------
# pd_parser
pd_parser_ext = Extension(
"pandas._libs.pandas_parser",
depends=["pandas/_libs/include/pandas/parser/pd_parser.h"],
sources=(
[
"pandas/_libs/src/parser/tokenizer.c",
"pandas/_libs/src/parser/io.c",
"pandas/_libs/src/parser/pd_parser.c",
]
),
include_dirs=[
"pandas/_libs/include",
],
extra_compile_args=(extra_compile_args),
extra_link_args=extra_link_args,
define_macros=macros,
)
extensions.append(pd_parser_ext)
# ----------------------------------------------------------------------
if __name__ == "__main__":
# Freeze to support parallel compilation when using spawn instead of fork
multiprocessing.freeze_support()
setup(
version=versioneer.get_version(),
ext_modules=maybe_cythonize(extensions, compiler_directives=directives),
cmdclass=cmdclass,
)
|
https://github.com/biopython/biopython
|
README.md
|
.. image:: https://img.shields.io/pypi/v/biopython.svg?logo=pypi
:alt: Biopython on the Python Package Index (PyPI)
:target: https://pypi.python.org/pypi/biopython
.. image:: https://img.shields.io/conda/vn/conda-forge/biopython.svg?logo=conda-forge
:alt: Biopython on the Conda package conda-forge channel
:target: https://anaconda.org/conda-forge/biopython
.. image:: https://results.pre-commit.ci/badge/github/biopython/biopython/master.svg
:target: https://results.pre-commit.ci/latest/github/biopython/biopython/master
:alt: pre-commit.ci status
.. image:: https://img.shields.io/circleci/build/github/biopython/biopython.svg?logo=circleci
:alt: Linux testing with CircleCI
:target: https://app.circleci.com/pipelines/github/biopython/biopython
.. image:: https://img.shields.io/appveyor/ci/biopython/biopython/master.svg?logo=appveyor
:alt: Windows testing with AppVeyor
:target: https://ci.appveyor.com/project/biopython/biopython/history
.. image:: https://img.shields.io/github/actions/workflow/status/biopython/biopython/ci.yml?logo=github-actions
:alt: GitHub workflow status
:target: https://github.com/biopython/biopython/actions
.. image:: https://img.shields.io/codecov/c/github/biopython/biopython/master.svg?logo=codecov
:alt: Test coverage on CodeCov
:target: https://codecov.io/github/biopython/biopython/
.. image:: http://depsy.org/api/package/pypi/biopython/badge.svg
:alt: Research software impact on Depsy
:target: http://depsy.org/package/python/biopython
.. image:: https://github.com/biopython/biopython/raw/master/Doc/images/biopython_logo_m.png
:alt: The Biopython Project
:target: http://biopython.org
Biopython README file
=====================
The Biopython Project is an international association of developers of freely
available Python tools for computational molecular biology.
This README file is intended primarily for people interested in working
with the Biopython source code, either one of the releases from the
http://biopython.org website, or from our repository on GitHub
https://github.com/biopython/biopython
Our user-centric documentation, `The Biopython Tutorial and Cookbook, and API
documentation <https://biopython.org/docs/latest/>`_, is generated from our
repository using Sphinx.
The `NEWS <https://github.com/biopython/biopython/blob/master/NEWS.rst>`_
file summarises the changes in each release of Biopython, alongside the
`DEPRECATED
<https://github.com/biopython/biopython/blob/master/DEPRECATED.rst>`_
file which notes API breakages.
The Biopython package is open source software made available under generous
terms. Please see the `LICENSE
<https://github.com/biopython/biopython/blob/master/LICENSE.rst>`_ file for
further details.
If you use Biopython in work contributing to a scientific publication, we ask
that you cite our application note (below) or one of the module specific
publications (listed on our website):
Cock, P.J.A. et al. Biopython: freely available Python tools for computational
molecular biology and bioinformatics. Bioinformatics 2009 Jun 1; 25(11) 1422-3
https://doi.org/10.1093/bioinformatics/btp163 pmid:19304878
For the impatient
=================
Python includes the package management system "pip" which should allow you to
install Biopython (and its dependency NumPy if needed), upgrade or uninstall
with just one terminal command::
pip install biopython
pip install --upgrade biopython
pip uninstall biopython
Since Biopython 1.70 we have provided pre-compiled binary wheel packages on
PyPI for Linux, macOS and Windows. This means pip install should be quick,
and not require a compiler.
As a developer or potential contributor, you may wish to download, build and
install Biopython yourself. This is described below.
Python Requirements
===================
We currently recommend using Python 3.11 from http://www.python.org
Biopython is currently supported and tested on the following Python
implementations:
- Python 3.10, 3.11, 3.12 and 3.13 -- see http://www.python.org
- PyPy3.10 v7.3.17 -- or later, see http://www.pypy.org
Optional Dependencies
=====================
Biopython requires NumPy (see http://www.numpy.org) which will be installed
automatically if you install Biopython with pip (see below for compiling
Biopython yourself).
Depending on which parts of Biopython you plan to use, there are a number of
other optional Python dependencies, which can be installed later if needed:
- ReportLab, see http://www.reportlab.com/opensource/ (optional)
This package is only used in ``Bio.Graphics``, so if you do not need this
functionality, you will not need to install this package.
- matplotlib, see http://matplotlib.org/ (optional)
``Bio.Phylo`` uses this package to plot phylogenetic trees.
- networkx, see https://networkx.github.io/ (optional) and
pygraphviz or pydot, see https://pygraphviz.github.io/ and
http://code.google.com/p/pydot/ (optional)
These packages are used for certain niche functions in ``Bio.Phylo``.
- rdflib, see https://github.com/RDFLib/rdflib (optional)
This package is used in the CDAO parser under ``Bio.Phylo``.
- psycopg2, see http://initd.org/psycopg/ (optional) or
PyGreSQL (pgdb), see http://www.pygresql.org/ (optional)
These packages are used by ``BioSQL`` to access a PostgreSQL database.
- MySQL Connector/Python, see http://dev.mysql.com/downloads/connector/python/
This package is used by ``BioSQL`` to access a MySQL database, and is
supported on PyPy too.
- mysqlclient, see https://github.com/PyMySQL/mysqlclient-python (optional)
This is a fork of the older MySQLdb and is used by ``BioSQL`` to access a
MySQL database. It is supported by PyPy.
In addition there are a number of useful third party tools you may wish to
install such as standalone NCBI BLAST, EMBOSS or ClustalW.
Installation From Source
========================
We recommend using the pre-compiled binary wheels available on PyPI using::
pip install biopython
However, if you need to compile Biopython yourself, the following are required
at compile time:
- Python including development header files like ``python.h``, which on Linux
are often not installed by default (try looking for and installing a
package named ``python-dev`` or ``python-devel`` as well as the ``python``
package).
- Appropriate C compiler for your version of Python, for example GCC on Linux,
MSVC on Windows. For Mac OS X, or as it is now branded, macOS, use Apple's
command line tools, which can be installed with the terminal command::
xcode-select --install
This will offer to install Apple's Xcode development suite as well; you can, but it
is not needed and takes a lot of disk space.
Then either download and decompress our source code, or fetch it using git.
Now change directory to the Biopython source code folder and run::
pip install -e .
python setup.py test
sudo python setup.py install
Substitute ``python`` with your specific version if required, for example
``python3``, or ``pypy3``.
To exclude tests that require an internet connection (and which may take a
long time), use the ``--offline`` option::
python setup.py test --offline
If you need to do additional configuration, e.g. changing the install
directory prefix, please type ``python setup.py``.
Testing
=======
Biopython includes a suite of regression tests to check if everything is
running correctly. To run the tests, go to the biopython source code
directory and type::
pip install -e .
python setup.py test
If you want to skip the online tests (which is recommended when doing repeated
testing), use::
python setup.py test --offline
Do not panic if you see messages warning of skipped tests::
test_DocSQL ... skipping. Install MySQLdb if you want to use Bio.DocSQL.
This most likely means that a package is not installed. You can
ignore this if it occurs in the tests for a module that you were not
planning on using. If you did want to use that module, please install
the required dependency and re-run the tests.
Some of the tests may fail due to network issues; this is often down to
chance or a service outage. If the problem does not go away on
re-running the tests, you can use the ``--offline`` option.
There is more testing information in the Biopython Tutorial & Cookbook.
Experimental code
=================
Biopython 1.61 introduced a new warning, ``Bio.BiopythonExperimentalWarning``,
which is used to mark any experimental code included in the otherwise
stable Biopython releases. Such 'beta' level code is ready for wider
testing, but still likely to change, and should only be tried by early
adopters in order to give feedback via the biopython-dev mailing list.
We'd expect such experimental code to reach stable status within one or two
releases, at which point our normal policies about trying to preserve
backwards compatibility would apply.
Bugs
====
While we try to ship a robust package, bugs inevitably pop up. If you are
having problems that might be caused by a bug in Biopython, it is possible
that it has already been identified. Update to the latest release if you are
not using it already, and retry. If the problem persists, please search our
bug database and our mailing lists to see if it has already been reported
(and hopefully fixed), and if not please do report the bug. We can't fix
problems we don't know about ;)
Issue tracker: https://github.com/biopython/biopython/issues
If you suspect the problem lies within a parser, it is likely that the data
format has changed and broken the parsing code. (The text BLAST and GenBank
formats seem to be particularly fragile.) Thus, the parsing code in
Biopython is sometimes updated faster than we can build Biopython releases.
You can get the most recent parser by pulling the relevant files (e.g. the
ones in ``Bio.SeqIO`` or ``Bio.Blast``) from our git repository. However, be
careful when doing this, because the code on GitHub is not as well-tested
as released code, and may contain new dependencies.
In any bug report, please let us know:
1. Which operating system and hardware (32 bit or 64 bit) you are using
2. Python version
3. Biopython version (or git commit/date)
4. Traceback that occurs (the full error message)
And also ideally:
5. Example code that breaks
6. A data file that causes the problem
Contributing, Bug Reports
=========================
Biopython is run by volunteers from all over the world, with many types of
backgrounds. We are always looking for people interested in helping with code
development, web-site management, documentation writing, technical
administration, and whatever else comes up.
If you wish to contribute, please first read `CONTRIBUTING.rst
<https://github.com/biopython/biopython/blob/master/CONTRIBUTING.rst>`_ here,
visit our web site http://biopython.org and join our mailing list:
http://biopython.org/wiki/Mailing_lists
Distribution Structure
======================
- ``README.rst`` -- This file.
- ``NEWS.rst`` -- Release notes and news.
- ``LICENSE.rst`` -- What you can do with the code.
- ``CONTRIB.rst`` -- An (incomplete) list of people who helped Biopython in
one way or another.
- ``CONTRIBUTING.rst`` -- An overview about how to contribute to Biopython.
- ``DEPRECATED.rst`` -- Contains information about modules in Biopython that
were removed or no longer recommended for use, and how to update code that
uses those modules.
- ``MANIFEST.in`` -- Configures which files to include in releases.
- ``setup.py`` -- Installation file.
- ``Bio/`` -- The main code base code.
- ``BioSQL/`` -- Code for using Biopython with BioSQL databases.
- ``Doc/`` -- Documentation.
- ``Scripts/`` -- Miscellaneous, possibly useful, standalone scripts.
- ``Tests/`` -- Regression testing code including sample data files.
|
https://github.com/biopython/biopython
|
setup.py
|
"""setuptools based setup script for Biopython.
This uses setuptools which is now the standard python mechanism for
installing packages. If you have downloaded and uncompressed the
Biopython source code, or fetched it from git, for the simplest
installation just type the command::
python setup.py install
However, you would normally install the latest Biopython release from
the PyPI archive with::
pip install biopython
For more in-depth instructions, see the installation section of the
Biopython manual, linked to from:
http://biopython.org/wiki/Documentation
Or, if all else fails, feel free to sign up to the Biopython
mailing list and ask for help. See:
http://biopython.org/wiki/Mailing_lists
"""
import ast
import os
import sys
try:
from setuptools import __version__ as setuptools_version
from setuptools import Command
from setuptools import Extension
from setuptools import setup
except ImportError:
sys.exit(
"We need the Python library setuptools to be installed. "
"Try running: python -m ensurepip"
)
setuptools_version_tuple = tuple(int(x) for x in setuptools_version.split(".")[:2])
if setuptools_version_tuple < (70, 1) and "bdist_wheel" in sys.argv:
# Check for presence of wheel in setuptools < 70.1
# Before setuptools 70.1, wheel is needed to make a bdist_wheel.
# Since 70.1 was released including
# https://github.com/pypa/setuptools/pull/4369,
# it is not needed.
try:
import wheel # noqa: F401
except ImportError:
sys.exit(
"We need both setuptools AND wheel packages installed "
"for bdist_wheel to work. Try running: pip install wheel"
)
# Make sure we have the right Python version.
MIN_PY_VER = (3, 10)
if sys.version_info[:2] < MIN_PY_VER:
sys.stderr.write(
("ERROR: Biopython requires Python %i.%i or later. " % MIN_PY_VER)
+ ("Python %d.%d detected.\n" % sys.version_info[:2])
)
sys.exit(1)
class test_biopython(Command):
"""Run all of the tests for the package.
This is an automatic test run class to make distutils kind of act like
perl. With this you can do:
python setup.py build
python setup.py install
python setup.py test
"""
description = "Automatically run the test suite for Biopython."
user_options = [("offline", None, "Don't run online tests")]
def initialize_options(self):
"""No-op, initialise options."""
self.offline = None
def finalize_options(self):
"""No-op, finalise options."""
pass
def run(self):
"""Run the tests."""
this_dir = os.getcwd()
# change to the test dir and run the tests
os.chdir("Tests")
sys.path.insert(0, "")
import run_tests
if self.offline:
run_tests.main(["--offline"])
else:
run_tests.main([])
# change back to the current directory
os.chdir(this_dir)
def can_import(module_name):
"""Check we can import the requested module."""
try:
return __import__(module_name)
except ImportError:
return None
# Using requirements.txt is preferred for an application
# (and likely will pin specific version numbers), using
# setup.py's install_requires is preferred for a library
# (and should try not to be overly narrow with versions).
REQUIRES = ["numpy"]
# --- set up the packages we are going to install
# standard biopython packages
PACKAGES = [
"Bio",
"Bio.Affy",
"Bio.Align",
"Bio.Align.substitution_matrices",
"Bio.Align.substitution_matrices.data",
"Bio.AlignIO",
"Bio.Alphabet",
"Bio.Blast",
"Bio.CAPS",
"Bio.Cluster",
"Bio.codonalign",
"Bio.Compass",
"Bio.Data",
"Bio.Emboss",
"Bio.Entrez",
"Bio.Entrez.DTDs",
"Bio.Entrez.XSDs",
"Bio.ExPASy",
"Bio.GenBank",
"Bio.Geo",
"Bio.Graphics",
"Bio.Graphics.GenomeDiagram",
"Bio.HMM",
"Bio.KEGG",
"Bio.KEGG.Compound",
"Bio.KEGG.Enzyme",
"Bio.KEGG.Gene",
"Bio.KEGG.Map",
"Bio.PDB.mmtf",
"Bio.KEGG.KGML",
"Bio.Medline",
"Bio.motifs",
"Bio.motifs.jaspar",
"Bio.Nexus",
"Bio.NMR",
"Bio.Pathway",
"Bio.Pathway.Rep",
"Bio.PDB",
"Bio.phenotype",
"Bio.PopGen",
"Bio.PopGen.GenePop",
"Bio.Restriction",
"Bio.SCOP",
"Bio.SearchIO",
"Bio.SearchIO._model",
"Bio.SearchIO.BlastIO",
"Bio.SearchIO.HHsuiteIO",
"Bio.SearchIO.HmmerIO",
"Bio.SearchIO.InfernalIO",
"Bio.SearchIO.ExonerateIO",
"Bio.SearchIO.InterproscanIO",
"Bio.SeqIO",
"Bio.SeqUtils",
"Bio.Sequencing",
"Bio.SVDSuperimposer",
"Bio.SwissProt",
"Bio.TogoWS",
"Bio.Phylo",
"Bio.Phylo.PAML",
"Bio.UniGene",
"Bio.UniProt",
# Other top level packages,
"BioSQL",
]
EXTENSIONS = [
Extension("Bio.Align._codonaligner", ["Bio/Align/_codonaligner.c"]),
Extension("Bio.Align._pairwisealigner", ["Bio/Align/_pairwisealigner.c"]),
Extension("Bio.Align._aligncore", ["Bio/Align/_aligncore.c"]),
Extension("Bio.cpairwise2", ["Bio/cpairwise2module.c"]),
Extension("Bio.Nexus.cnexus", ["Bio/Nexus/cnexus.c"]),
Extension("Bio.motifs._pwm", ["Bio/motifs/_pwm.c"]),
Extension(
"Bio.Cluster._cluster",
["Bio/Cluster/cluster.c", "Bio/Cluster/clustermodule.c"],
extra_compile_args=["-DCLUSTER_USE_PYTHON_MEMORY"],
),
Extension("Bio.PDB.ccealign", ["Bio/PDB/ccealignmodule.c"]),
Extension("Bio.PDB.kdtrees", ["Bio/PDB/kdtrees.c"]),
Extension("Bio.PDB._bcif_helper", ["Bio/PDB/bcifhelpermodule.c"]),
Extension("Bio.SeqIO._twoBitIO", ["Bio/SeqIO/_twoBitIO.c"]),
]
def get_version():
"""Get version number from __init__.py."""
for line in open("Bio/__init__.py"):
if line.startswith("__version__ = "):
return ast.literal_eval(line.split("=")[1].strip())
return "Undefined"
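# Illustration (added comment): given a line `__version__ = "1.85"` in
# Bio/__init__.py (version number hypothetical), get_version() returns "1.85";
# ast.literal_eval parses the quoted string safely without importing the package.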
__version__ = get_version()
# We now load in our reStructuredText README.rst file to pass explicitly in the
# metadata, since at time of writing PyPI did not do this for us.
#
# Must make encoding explicit to avoid any conflict with the local default.
# Currently keeping README as ASCII (might switch to UTF8 later if needed).
# If any invalid character does appear in README, this will fail and alert us.
with open("README.rst", encoding="ascii") as handle:
readme_rst = handle.read()
setup(
name="biopython",
version=__version__,
author="The Biopython Contributors",
author_email="[email protected]",
url="https://biopython.org/",
description="Freely available tools for computational molecular biology.",
long_description=readme_rst,
project_urls={
"Documentation": "https://biopython.org/wiki/Documentation",
"Source": "https://github.com/biopython/biopython/",
"Tracker": "https://github.com/biopython/biopython/issues",
},
classifiers=[
"Development Status :: 5 - Production/Stable",
"Intended Audience :: Developers",
"Intended Audience :: Science/Research",
"License :: Freely Distributable",
# Technically the "Biopython License Agreement" is not OSI approved,
# but is almost https://opensource.org/licenses/HPND so might put:
# 'License :: OSI Approved',
# To resolve this we are moving to dual-licensing with 3-clause BSD:
# 'License :: OSI Approved :: BSD License',
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Topic :: Software Development :: Libraries :: Python Modules",
],
cmdclass={"test": test_biopython},
packages=PACKAGES,
ext_modules=EXTENSIONS,
include_package_data=True, # done via MANIFEST.in under setuptools
install_requires=REQUIRES,
python_requires=">=%i.%i" % MIN_PY_VER,
)
|
https://github.com/scipy/scipy
|
README.md
|
.. image:: https://raw.githubusercontent.com/scipy/scipy/main/doc/source/_static/logo.svg
:target: https://scipy.org
:width: 110
:height: 110
:align: left
.. image:: https://img.shields.io/badge/powered%20by-NumFOCUS-orange.svg?style=flat&colorA=E1523D&colorB=007D8A
:target: https://numfocus.org
.. image:: https://img.shields.io/pypi/dm/scipy.svg?label=Pypi%20downloads
:target: https://pypi.org/project/scipy/
.. image:: https://img.shields.io/conda/dn/conda-forge/scipy.svg?label=Conda%20downloads
:target: https://anaconda.org/conda-forge/scipy
.. image:: https://img.shields.io/badge/stackoverflow-Ask%20questions-blue.svg
:target: https://stackoverflow.com/questions/tagged/scipy
.. image:: https://img.shields.io/badge/DOI-10.1038%2Fs41592--019--0686--2-blue.svg
:target: https://www.nature.com/articles/s41592-019-0686-2
SciPy (pronounced "Sigh Pie") is open-source software for mathematics,
science, and engineering. It includes modules for statistics, optimization,
integration, linear algebra, Fourier transforms, signal and image processing,
ODE solvers, and more.
- **Website:** https://scipy.org
- **Documentation:** https://docs.scipy.org/doc/scipy/
- **Development version of the documentation:** https://scipy.github.io/devdocs
- **SciPy development forum:** https://discuss.scientific-python.org/c/contributor/scipy
- **Stack Overflow:** https://stackoverflow.com/questions/tagged/scipy
- **Source code:** https://github.com/scipy/scipy
- **Contributing:** https://scipy.github.io/devdocs/dev/index.html
- **Bug reports:** https://github.com/scipy/scipy/issues
- **Code of Conduct:** https://docs.scipy.org/doc/scipy/dev/conduct/code_of_conduct.html
- **Report a security vulnerability:** https://tidelift.com/docs/security
- **Citing in your work:** https://www.scipy.org/citing-scipy/
SciPy is built to work with
NumPy arrays, and provides many user-friendly and efficient numerical routines,
such as routines for numerical integration and optimization. Together, they
run on all popular operating systems, are quick to install, and are free of
charge. NumPy and SciPy are easy to use, but powerful enough to be depended
upon by some of the world's leading scientists and engineers. If you need to
manipulate numbers on a computer and display or publish the results, give
SciPy a try!
For the installation instructions, see `our install
guide <https://scipy.org/install/>`__.
Call for Contributions
----------------------
We appreciate and welcome contributions. Small improvements or fixes are always appreciated; issues labeled as "good
first issue" may be a good starting point. Have a look at `our contributing
guide <https://scipy.github.io/devdocs/dev/index.html>`__.
Writing code isn’t the only way to contribute to SciPy. You can also:
- review pull requests
- triage issues
- develop tutorials, presentations, and other educational materials
- maintain and improve `our website <https://github.com/scipy/scipy.org>`__
- develop graphic design for our brand assets and promotional materials
- help with outreach and onboard new contributors
- write grant proposals and help with other fundraising efforts
If you’re unsure where to start or how your skills fit in, reach out! You can
ask on the `forum <https://discuss.scientific-python.org/c/contributor/scipy>`__
or here, on GitHub, by leaving a comment on a relevant issue that is already
open.
If you are new to contributing to open source, `this
guide <https://opensource.guide/how-to-contribute/>`__ helps explain why, what,
and how to get involved.
|
https://github.com/sympy/sympy
|
README.md
|
# SymPy
[](https://pypi.python.org/pypi/sympy)
[](https://gitter.im/sympy/sympy?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[](https://zenodo.org/badge/latestdoi/18918/sympy/sympy)
[](https://pepy.tech/project/sympy)
[](https://github.com/sympy/sympy/issues)
[](https://git-scm.com/book/en/v2/GitHub-Contributing-to-a-Project)
[](https://numfocus.org)
[](https://github.com/sympy/sympy/releases)
[](https://sympy.org/)
See the [AUTHORS](https://github.com/sympy/sympy/blob/master/AUTHORS) file for the list of authors.
And many more people helped on the SymPy mailing list, reported bugs,
helped organize SymPy's participation in the Google Summer of Code, the
Google Highly Open Participation Contest, Google Code-In, wrote and
blogged about SymPy...
License: New BSD License (see the [LICENSE](https://github.com/sympy/sympy/blob/master/LICENSE) file for details) covers all
files in the sympy repository unless stated otherwise.
Our mailing list is at
<https://groups.google.com/forum/?fromgroups#!forum/sympy>.
We have a community chat at [Gitter](https://gitter.im/sympy/sympy). Feel
free to ask us anything there. We have a very welcoming and helpful
community.
## Download
The recommended installation method is through Anaconda,
<https://www.anaconda.com/products/distribution>
You can also get the latest version of SymPy from
<https://pypi.python.org/pypi/sympy/>
To get the git version do
$ git clone https://github.com/sympy/sympy.git
For other options (tarballs, debs, etc.), see
<https://docs.sympy.org/dev/install.html>.
## Documentation and Usage
For in-depth instructions on installation and building the
documentation, see the [SymPy Documentation Style Guide](https://docs.sympy.org/dev/documentation-style-guide.html).
Everything is at:
<https://docs.sympy.org/>
You can generate everything at the above site in your local copy of
SymPy by:
$ cd doc
$ make html
Then the docs will be in `_build/html`. If
you don't want to read that, here is a short usage:
From this directory, start Python and:
``` python
>>> from sympy import Symbol, cos
>>> x = Symbol('x')
>>> e = 1/cos(x)
>>> print(e.series(x, 0, 10))
1 + x**2/2 + 5*x**4/24 + 61*x**6/720 + 277*x**8/8064 + O(x**10)
```
SymPy also comes with a console that is a simple wrapper around the
classic python console (or IPython when available) that loads the SymPy
namespace and executes some common commands for you.
To start it, issue:
$ bin/isympy
from this directory if SymPy is not installed, or simply:
$ isympy
if SymPy is installed.
## Installation
To install SymPy using PyPI, run the following command:
$ pip install sympy
To install SymPy using Anaconda, run the following command:
$ conda install -c anaconda sympy
To install SymPy from GitHub source, first clone SymPy using `git`:
$ git clone https://github.com/sympy/sympy.git
Then, in the `sympy` repository that you cloned, simply run:
$ pip install .
See <https://docs.sympy.org/dev/install.html> for more information.
## Contributing
We welcome contributions from anyone, even if you are new to open
source. Please read our [Introduction to Contributing](https://docs.sympy.org/dev/contributing/introduction-to-contributing.html)
page and the [SymPy Documentation Style Guide](https://docs.sympy.org/dev/documentation-style-guide.html). If you
are new and looking for some way to contribute, a good place to start is
to look at the issues tagged [Easy to Fix](https://github.com/sympy/sympy/issues?q=is%3Aopen+is%3Aissue+label%3A%22Easy+to+Fix%22).
Please note that all participants in this project are expected to follow
our Code of Conduct. By participating in this project you agree to abide
by its terms. See [CODE\_OF\_CONDUCT.md](CODE_OF_CONDUCT.md).
## Tests
To execute all tests, run:
$ ./setup.py test
in the current directory.
For the more fine-grained running of tests or doctests, use `bin/test`
or respectively `bin/doctest`. The master branch is automatically tested
by GitHub Actions.
To test pull requests, use
[sympy-bot](https://github.com/sympy/sympy-bot).
## Regenerate Experimental LaTeX Parser/Lexer
The parser and lexer were generated with the [ANTLR4](http://antlr4.org)
toolchain in `sympy/parsing/latex/_antlr` and checked into the repo.
Presently, most users should not need to regenerate these files, but
if you plan to work on this feature, you will need the `antlr4`
command-line tool (and you must ensure that it is in your `PATH`).
One way to get it is:
$ conda install -c conda-forge antlr=4.11.1
Alternatively, follow the instructions on the ANTLR website and download
the `antlr-4.11.1-complete.jar`. Then export the `CLASSPATH` as instructed
and instead of creating `antlr4` as an alias, make it an executable file
with the following contents:
``` bash
#!/bin/bash
java -jar /usr/local/lib/antlr-4.11.1-complete.jar "$@"
```
After making changes to `sympy/parsing/latex/LaTeX.g4`, run:
$ ./setup.py antlr
## Clean
To clean everything (thus getting the same tree as in the repository):
$ git clean -Xdf
which will clear everything ignored by `.gitignore`, and:
$ git clean -df
to clear all untracked files. You can revert the most recent changes in
git with:
$ git reset --hard
WARNING: The above commands will all clear changes you may have made,
and you will lose them forever. Be sure to check things with `git
status`, `git diff`, `git clean -Xn`, and `git clean -n` before doing any
of those.
## Bugs
Our issue tracker is at <https://github.com/sympy/sympy/issues>. Please
report any bugs that you find. Or, even better, fork the repository on
GitHub and create a pull request. We welcome all changes, big or small,
and we will help you make the pull request if you are new to git (just
ask on our mailing list or Gitter Channel). If you further have any queries, you can find answers
on Stack Overflow using the [sympy](https://stackoverflow.com/questions/tagged/sympy) tag.
## Brief History
SymPy was started by Ondřej Čertík in 2005; he wrote some code during
the summer, then some more during summer 2006. In February
2007, Fabian Pedregosa joined the project and helped fix many things,
contributed documentation, and made it alive again. 5 students (Mateusz
Paprocki, Brian Jorgensen, Jason Gedge, Robert Schwarz, and Chris Wu)
improved SymPy incredibly during summer 2007 as part of the Google
Summer of Code. Pearu Peterson joined the development during the summer
2007 and he has made SymPy much more competitive by rewriting the core
from scratch, which has made it from 10x to 100x faster. Jurjen N.E. Bos
has contributed pretty-printing and other patches. Fredrik Johansson has
written mpmath and contributed a lot of patches.
SymPy has participated in every Google Summer of Code since 2007. You
can see <https://github.com/sympy/sympy/wiki#google-summer-of-code> for
full details. Each year has improved SymPy by leaps and bounds. Most of SymPy's
development has come from Google Summer of Code students.
In 2011, Ondřej Čertík stepped down as lead developer, with Aaron
Meurer, who also started as a Google Summer of Code student, taking his
place. Ondřej Čertík is still active in the community but is too busy
with work and family to play a lead development role.
Since then, a lot more people have joined the development and some
people have also left. You can see the full list in doc/src/aboutus.rst,
or online at:
<https://docs.sympy.org/dev/aboutus.html#sympy-development-team>
The git history goes back to 2007 when development moved from svn to hg.
To see the history before that point, look at
<https://github.com/sympy/sympy-old>.
You can use git to see the biggest developers. The command:
$ git shortlog -ns
will show each developer, sorted by commits to the project. The command:
$ git shortlog -ns --since="1 year"
will show the top developers from the last year.
## Citation
To cite SymPy in publications use
> Meurer A, Smith CP, Paprocki M, Čertík O, Kirpichev SB, Rocklin M,
> Kumar A, Ivanov S, Moore JK, Singh S, Rathnayake T, Vig S, Granger BE,
> Muller RP, Bonazzi F, Gupta H, Vats S, Johansson F, Pedregosa F, Curry
> MJ, Terrel AR, Roučka Š, Saboo A, Fernando I, Kulal S, Cimrman R,
> Scopatz A. (2017) SymPy: symbolic computing in Python. *PeerJ Computer
> Science* 3:e103 <https://doi.org/10.7717/peerj-cs.103>
A BibTeX entry for LaTeX users is
``` bibtex
@article{10.7717/peerj-cs.103,
title = {SymPy: symbolic computing in Python},
author = {Meurer, Aaron and Smith, Christopher P. and Paprocki, Mateusz and \v{C}ert\'{i}k, Ond\v{r}ej and Kirpichev, Sergey B. and Rocklin, Matthew and Kumar, Amit and Ivanov, Sergiu and Moore, Jason K. and Singh, Sartaj and Rathnayake, Thilina and Vig, Sean and Granger, Brian E. and Muller, Richard P. and Bonazzi, Francesco and Gupta, Harsh and Vats, Shivam and Johansson, Fredrik and Pedregosa, Fabian and Curry, Matthew J. and Terrel, Andy R. and Rou\v{c}ka, \v{S}t\v{e}p\'{a}n and Saboo, Ashutosh and Fernando, Isuru and Kulal, Sumith and Cimrman, Robert and Scopatz, Anthony},
year = 2017,
month = Jan,
keywords = {Python, Computer algebra system, Symbolics},
abstract = {
SymPy is an open-source computer algebra system written in pure Python. It is built with a focus on extensibility and ease of use, through both interactive and programmatic applications. These characteristics have led SymPy to become a popular symbolic library for the scientific Python ecosystem. This paper presents the architecture of SymPy, a description of its features, and a discussion of select submodules. The supplementary material provides additional examples and further outlines details of the architecture and features of SymPy.
},
volume = 3,
pages = {e103},
journal = {PeerJ Computer Science},
issn = {2376-5992},
url = {https://doi.org/10.7717/peerj-cs.103},
doi = {10.7717/peerj-cs.103}
}
```
SymPy is BSD licensed, so you are free to use it whatever you like, be
it academic, commercial, creating forks or derivatives, as long as you
copy the BSD statement if you redistribute it (see the LICENSE file for
details). That said, although not required by the SymPy license, if it
is convenient for you, please cite SymPy when using it in your work and
also consider contributing all your changes back, so that we can
incorporate it and all of us will benefit in the end.
|
https://github.com/sympy/sympy
|
conftest.py
|
# -*- coding: utf-8 -*-
from __future__ import print_function, division, absolute_import
import os
from itertools import chain
import json
import sys
import warnings
import pytest
from sympy.testing.runtests import setup_pprint, _get_doctest_blacklist
durations_path = os.path.join(os.path.dirname(__file__), '.ci', 'durations.json')
blacklist_path = os.path.join(os.path.dirname(__file__), '.ci', 'blacklisted.json')
collect_ignore = _get_doctest_blacklist()
# Set up printing for doctests
setup_pprint(disable_line_wrap=False)
sys.__displayhook__ = sys.displayhook
#from sympy import pprint_use_unicode
#pprint_use_unicode(False)
def _mk_group(group_dict):
return list(chain(*[[k+'::'+v for v in files] for k, files in group_dict.items()]))
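# Illustration (added comment, example input hypothetical):
#   _mk_group({"sympy/core/tests/test_basic.py": ["test_foo"]})
# returns ["sympy/core/tests/test_basic.py::test_foo"], i.e. pytest node ids.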
if os.path.exists(durations_path):
with open(durations_path, 'rt') as fin:
text = fin.read()
veryslow_group, slow_group = [_mk_group(group_dict) for group_dict in json.loads(text)]
else:
# warnings in conftest has issues: https://github.com/pytest-dev/pytest/issues/2891
warnings.warn("conftest.py:22: Could not find %s, --quickcheck and --veryquickcheck will have no effect.\n" % durations_path)
veryslow_group, slow_group = [], []
if os.path.exists(blacklist_path):
with open(blacklist_path, 'rt') as stream:
blacklist_group = _mk_group(json.load(stream))
else:
warnings.warn("conftest.py:28: Could not find %s, no tests will be skipped due to blacklisting\n" % blacklist_path)
blacklist_group = []
def pytest_addoption(parser):
parser.addoption("--quickcheck", dest="runquick", action="store_true",
help="Skip very slow tests (see ./ci/parse_durations_log.py)")
parser.addoption("--veryquickcheck", dest="runveryquick", action="store_true",
help="Skip slow & very slow (see ./ci/parse_durations_log.py)")
def pytest_configure(config):
# register an additional marker
config.addinivalue_line("markers", "slow: manually marked test as slow (use .ci/durations.json instead)")
config.addinivalue_line("markers", "quickcheck: skip very slow tests")
config.addinivalue_line("markers", "veryquickcheck: skip slow & very slow tests")
def pytest_runtest_setup(item):
if isinstance(item, pytest.Function):
if item.nodeid in veryslow_group and (item.config.getvalue("runquick") or
item.config.getvalue("runveryquick")):
pytest.skip("very slow test, skipping since --quickcheck or --veryquickcheck was passed.")
return
if item.nodeid in slow_group and item.config.getvalue("runveryquick"):
pytest.skip("slow test, skipping since --veryquickcheck was passed.")
return
if item.nodeid in blacklist_group:
pytest.skip("blacklisted test, see %s" % blacklist_path)
return
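# Illustrative invocations (added comment), given the groups loaded above:
#   pytest --quickcheck       # skip tests in the very-slow group
#   pytest --veryquickcheck   # skip both the slow and very-slow groups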
|
https://github.com/sympy/sympy
|
isympy.py
|
"""
Python shell for SymPy.
This is just a normal Python shell (IPython shell if you have the
IPython package installed), that executes the following commands for
the user:
>>> from __future__ import division
>>> from sympy import *
>>> x, y, z, t = symbols('x y z t')
>>> k, m, n = symbols('k m n', integer=True)
>>> f, g, h = symbols('f g h', cls=Function)
>>> init_printing()
So starting 'isympy' is equivalent to starting Python (or IPython) and
executing the above commands by hand. It is intended for easy and quick
experimentation with SymPy. isympy is a good way to use SymPy as an
interactive calculator. If you have IPython and Matplotlib installed, then
interactive plotting is enabled by default.
COMMAND LINE OPTIONS
--------------------
-c CONSOLE, --console=CONSOLE
Use the specified shell (Python or IPython) as the console
backend instead of the default one (IPython if present, Python
otherwise), e.g.:
$isympy -c python
CONSOLE must be one of 'ipython' or 'python'
-p PRETTY, --pretty PRETTY
Setup pretty-printing in SymPy. When pretty-printing is enabled,
expressions can be printed with Unicode or ASCII. The default is
to use pretty-printing (with Unicode if the terminal supports it).
When this option is 'no', expressions will not be pretty-printed
and ASCII will be used:
$isympy -p no
PRETTY must be one of 'unicode', 'ascii', or 'no'
-t TYPES, --types=TYPES
Setup the ground types for the polys. By default, gmpy ground types
are used if gmpy2 or gmpy is installed, otherwise it falls back to python
ground types, which are a little bit slower. You can manually
choose python ground types even if gmpy is installed (e.g., for
testing purposes):
$isympy -t python
TYPES must be one of 'gmpy', 'gmpy1' or 'python'
Note that the ground type gmpy1 is primarily intended for testing; it
forces the use of gmpy version 1 even if gmpy2 is available.
This is the same as setting the environment variable
SYMPY_GROUND_TYPES to the given ground type (e.g.,
SYMPY_GROUND_TYPES='gmpy')
The ground types can be determined interactively from the variable
sympy.polys.domains.GROUND_TYPES.
-o ORDER, --order ORDER
Setup the ordering of terms for printing. The default is lex, which
orders terms lexicographically (e.g., x**2 + x + 1). You can choose
other orderings, such as rev-lex, which will use reverse
lexicographic ordering (e.g., 1 + x + x**2):
$isympy -o rev-lex
ORDER must be one of 'lex', 'rev-lex', 'grlex', 'rev-grlex',
'grevlex', 'rev-grevlex', 'old', or 'none'.
Note that for very large expressions, ORDER='none' may speed up
printing considerably but the terms will have no canonical order.
-q, --quiet
Print only Python's and SymPy's versions to stdout at startup.
-d, --doctest
Use the same format that should be used for doctests. This is
equivalent to -c python -p no.
-C, --no-cache
Disable the caching mechanism. Disabling the cache may slow certain
operations down considerably. This is useful for testing the cache,
or for benchmarking, as the cache can result in deceptive timings.
This is equivalent to setting the environment variable
SYMPY_USE_CACHE to 'no'.
-a, --auto-symbols (requires at least IPython 0.11)
Automatically create missing symbols. Normally, typing a name of a
Symbol that has not been instantiated first would raise NameError,
but with this option enabled, any undefined name will be
automatically created as a Symbol.
Note that this is intended only for interactive, calculator style
usage. In a script that uses SymPy, Symbols should be instantiated
at the top, so that it's clear what they are.
This will not override any names that are already defined, which
includes the single character letters represented by the mnemonic
QCOSINE (see the "Gotchas and Pitfalls" document in the
documentation). You can delete existing names by executing "del
name". If a name is defined, typing "'name' in dir()" will return True.
The Symbols that are created using this have default assumptions.
If you want to place assumptions on symbols, you should create them
using symbols() or var().
Finally, this only works in the top level namespace. So, for
example, if you define a function in isympy with an undefined
Symbol, it will not work.
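For example, with -a enabled, an undefined name simply evaluates to a
new Symbol:
>>> alpha**2 + beta
alpha**2 + beta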
See also the -i and -I options.
-i, --int-to-Integer (requires at least IPython 0.11)
Automatically wrap int literals with Integer. This makes it so that
things like 1/2 will come out as Rational(1, 2), rather than 0.5. This
works by preprocessing the source and wrapping all int literals with
Integer. Note that this will not change the behavior of int literals
assigned to variables, and it also won't change the behavior of functions
that return int literals.
If you want an int, you can wrap the literal in int(), e.g. int(3)/int(2)
gives 1.5 (with division imported from __future__).
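For example, with -i enabled:
>>> 1/2
1/2
whereas plain Python would print 0.5.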
-I, --interactive (requires at least IPython 0.11)
This is equivalent to --auto-symbols --int-to-Integer. Future options
designed for ease of interactive use may be added to this.
-D, --debug
Enable debugging output. This is the same as setting the
environment variable SYMPY_DEBUG to 'True'. The debug status is set
in the variable SYMPY_DEBUG within isympy.
-- IPython options
Additionally you can pass command line options directly to the IPython
interpreter (the standard Python shell is not supported). However, you
need to add the '--' separator between the two types of options, e.g.
between the startup banner option and the colors option. Enter the
options in the form required by the version of IPython you are using:
in IPython 0.11,
$isympy -q -- --colors=NoColor
or older versions of IPython,
$isympy -q -- -colors NoColor
See also isympy --help.
"""
import os
import sys
# DO NOT IMPORT SYMPY HERE! Or the setting of the sympy environment variables
# by the command line will break.
def main() -> None:
from argparse import ArgumentParser, RawDescriptionHelpFormatter
VERSION = None
if '--version' in sys.argv:
# We cannot import sympy before this is run, because flags like -C and
# -t set environment variables that must be set before SymPy is
# imported. The only thing we need to import it for is to get the
# version, which only matters with the --version flag.
import sympy
VERSION = sympy.__version__
usage = 'isympy [options] -- [ipython options]'
parser = ArgumentParser(
usage=usage,
description=__doc__,
formatter_class=RawDescriptionHelpFormatter,
)
parser.add_argument('--version', action='version', version=VERSION)
parser.add_argument(
'-c', '--console',
dest='console',
action='store',
default=None,
choices=['ipython', 'python', 'bpython'],
metavar='CONSOLE',
help='select type of interactive session: ipython | python | bpython; '
'defaults to ipython if IPython is installed, otherwise python')
parser.add_argument(
'-p', '--pretty',
dest='pretty',
action='store',
default=None,
metavar='PRETTY',
choices=['unicode', 'ascii', 'no'],
help='setup pretty printing: unicode | ascii | no; defaults to '
'unicode printing if the terminal supports it, otherwise ascii')
parser.add_argument(
'-t', '--types',
dest='types',
action='store',
default=None,
metavar='TYPES',
choices=['gmpy', 'gmpy1', 'python'],
help='setup ground types: gmpy | gmpy1 | python; defaults to gmpy if gmpy2 '
'or gmpy is installed, otherwise python')
parser.add_argument(
'-o', '--order',
dest='order',
action='store',
default=None,
metavar='ORDER',
choices=['lex', 'grlex', 'grevlex', 'rev-lex', 'rev-grlex', 'rev-grevlex', 'old', 'none'],
help='setup ordering of terms: [rev-]lex | [rev-]grlex | [rev-]grevlex | old | none; defaults to lex')
parser.add_argument(
'-q', '--quiet',
dest='quiet',
action='store_true',
default=False,
help='print only version information at startup')
parser.add_argument(
'-d', '--doctest',
dest='doctest',
action='store_true',
default=False,
help='use the doctest format for output (you can just copy and paste it)')
parser.add_argument(
'-C', '--no-cache',
dest='cache',
action='store_false',
default=True,
help='disable caching mechanism')
parser.add_argument(
'-a', '--auto-symbols',
dest='auto_symbols',
action='store_true',
default=False,
help='automatically construct missing symbols')
parser.add_argument(
'-i', '--int-to-Integer',
dest='auto_int_to_Integer',
action='store_true',
default=False,
help="automatically wrap int literals with Integer")
parser.add_argument(
'-I', '--interactive',
dest='interactive',
action='store_true',
default=False,
help="equivalent to -a -i")
parser.add_argument(
'-D', '--debug',
dest='debug',
action='store_true',
default=False,
help='enable debugging output')
(options, ipy_args) = parser.parse_known_args()
if '--' in ipy_args:
ipy_args.remove('--')
if not options.cache:
os.environ['SYMPY_USE_CACHE'] = 'no'
if options.types:
os.environ['SYMPY_GROUND_TYPES'] = options.types
if options.debug:
os.environ['SYMPY_DEBUG'] = str(options.debug)
if options.doctest:
options.pretty = 'no'
options.console = 'python'
session = options.console
from sympy.interactive.session import ConsoleBackend
console_backend = ConsoleBackend.IPYTHON
if session is not None:
if session == "python":
console_backend = ConsoleBackend.PYTHON
elif session == "ipython":
console_backend = ConsoleBackend.IPYTHON
elif session == "bpython":
console_backend = ConsoleBackend.BPYTHON
else:
print("Unknown console name")
return
else:
try:
import IPython # noqa: F401
console_backend = ConsoleBackend.IPYTHON
except ImportError:
if not options.quiet:
from sympy.interactive.session import no_ipython
print(no_ipython)
console_backend = ConsoleBackend.PYTHON
args = {
'pretty_print': True,
'use_unicode': None,
'use_latex': None,
'order': None,
'argv': ipy_args,
}
if options.pretty == 'unicode':
args['use_unicode'] = True
elif options.pretty == 'ascii':
args['use_unicode'] = False
elif options.pretty == 'no':
args['pretty_print'] = False
if options.order is not None:
args['order'] = options.order
args['quiet'] = options.quiet
args['auto_symbols'] = options.auto_symbols or options.interactive
args['auto_int_to_Integer'] = options.auto_int_to_Integer or options.interactive
from sympy.interactive import init_session
init_session(console_backend=console_backend, **args)
if __name__ == "__main__":
main()
|
https://github.com/sympy/sympy
|
setup.py
|
#!/usr/bin/env python
"""Setup script for SymPy.
This uses Setuptools (https://setuptools.pypa.io/en/latest/), the standard
Python mechanism for installing packages.
For the easiest installation just type the command (you'll probably need
root privileges for that):
pip install .
This will install the library in the default location. For instructions on
how to customize the installation procedure read the output of:
pip install --help
In addition, there are some other commands:
python setup.py test -> will run the complete test suite
To get a full list of available commands, read the output of:
python setup.py --help-commands
Or, if all else fails, feel free to write to the sympy list at
[email protected] and ask for help.
"""
import sys
import os
import subprocess
from pathlib import Path
from setuptools import setup, Command
from setuptools.command.sdist import sdist
# This directory
dir_setup = os.path.dirname(os.path.realpath(__file__))
extra_kwargs = {
'zip_safe': False,
'entry_points': {
'console_scripts': [
'isympy = isympy:main',
]
}
}
# Keep in sync with sympy/__init__.py and python_requires below
if sys.version_info < (3, 9):
print("SymPy requires Python 3.9 or newer. Python %d.%d detected"
% sys.version_info[:2])
sys.exit(-1)
# Check that this list is up to date against the result of the command:
# python bin/generate_module_list.py
modules = [
'sympy.algebras',
'sympy.assumptions',
'sympy.assumptions.handlers',
'sympy.assumptions.predicates',
'sympy.assumptions.relation',
'sympy.benchmarks',
'sympy.calculus',
'sympy.categories',
'sympy.codegen',
'sympy.combinatorics',
'sympy.concrete',
'sympy.core',
'sympy.core.benchmarks',
'sympy.crypto',
'sympy.diffgeom',
'sympy.discrete',
'sympy.external',
'sympy.functions',
'sympy.functions.combinatorial',
'sympy.functions.elementary',
'sympy.functions.elementary.benchmarks',
'sympy.functions.special',
'sympy.functions.special.benchmarks',
'sympy.geometry',
'sympy.holonomic',
'sympy.integrals',
'sympy.integrals.benchmarks',
'sympy.interactive',
'sympy.liealgebras',
'sympy.logic',
'sympy.logic.algorithms',
'sympy.logic.utilities',
'sympy.matrices',
'sympy.matrices.benchmarks',
'sympy.matrices.expressions',
'sympy.multipledispatch',
'sympy.ntheory',
'sympy.parsing',
'sympy.parsing.autolev',
'sympy.parsing.autolev._antlr',
'sympy.parsing.c',
'sympy.parsing.fortran',
'sympy.parsing.latex',
'sympy.parsing.latex._antlr',
'sympy.parsing.latex.lark',
'sympy.physics',
'sympy.physics.biomechanics',
'sympy.physics.continuum_mechanics',
'sympy.physics.control',
'sympy.physics.hep',
'sympy.physics.mechanics',
'sympy.physics.optics',
'sympy.physics.quantum',
'sympy.physics.units',
'sympy.physics.units.definitions',
'sympy.physics.units.systems',
'sympy.physics.vector',
'sympy.plotting',
'sympy.plotting.backends',
'sympy.plotting.backends.matplotlibbackend',
'sympy.plotting.backends.textbackend',
'sympy.plotting.intervalmath',
'sympy.plotting.pygletplot',
'sympy.polys',
'sympy.polys.agca',
'sympy.polys.benchmarks',
'sympy.polys.domains',
'sympy.polys.matrices',
'sympy.polys.numberfields',
'sympy.printing',
'sympy.printing.pretty',
'sympy.sandbox',
'sympy.series',
'sympy.series.benchmarks',
'sympy.sets',
'sympy.sets.handlers',
'sympy.simplify',
'sympy.solvers',
'sympy.solvers.benchmarks',
'sympy.solvers.diophantine',
'sympy.solvers.ode',
'sympy.stats',
'sympy.stats.sampling',
'sympy.strategies',
'sympy.strategies.branch',
'sympy.tensor',
'sympy.tensor.array',
'sympy.tensor.array.expressions',
'sympy.testing',
'sympy.unify',
'sympy.utilities',
'sympy.utilities._compilation',
'sympy.utilities.mathml',
'sympy.utilities.mathml.data',
'sympy.vector',
]
class test_sympy(Command):
"""Runs all tests under the sympy/ folder
"""
description = "run all tests and doctests; also see bin/test and bin/doctest"
user_options = [] # setuptools complains if this is not here.
def __init__(self, *args):
self.args = args[0] # so we can pass it to other classes
Command.__init__(self, *args)
def initialize_options(self): # setuptools wants this
pass
def finalize_options(self): # this too
pass
def run(self):
from sympy.testing import runtests
runtests.run_all_tests()
class antlr(Command):
"""Generate code with antlr4"""
description = "generate parser code from antlr grammars"
user_options = [] # setuptools complains if this is not here.
def __init__(self, *args):
self.args = args[0] # so we can pass it to other classes
Command.__init__(self, *args)
def initialize_options(self): # setuptools wants this
pass
def finalize_options(self): # this too
pass
def run(self):
from sympy.parsing.latex._build_latex_antlr import build_parser as build_latex_parser
if not build_latex_parser():
sys.exit(-1)
from sympy.parsing.autolev._build_autolev_antlr import build_parser as build_autolev_parser
if not build_autolev_parser():
sys.exit(-1)
class sdist_sympy(sdist):
def run(self):
# Fetch git commit hash and write down to commit_hash.txt before
# shipped in tarball.
commit_hash = None
commit_hash_filepath = 'doc/commit_hash.txt'
try:
commit_hash = \
subprocess.check_output(['git', 'rev-parse', 'HEAD'])
commit_hash = commit_hash.decode('ascii')
commit_hash = commit_hash.rstrip()
print('Commit hash found : {}.'.format(commit_hash))
print('Writing it to {}.'.format(commit_hash_filepath))
except Exception:
pass
if commit_hash:
Path(commit_hash_filepath).write_text(commit_hash)
super().run()
try:
os.remove(commit_hash_filepath)
print(
'Successfully removed temporary file {}.'
.format(commit_hash_filepath))
except OSError as e:
print("Error deleting %s - %s." % (e.filename, e.strerror))
# Check that this list is up to date against the result of the command:
# python bin/generate_test_list.py
tests = [
'sympy.algebras.tests',
'sympy.assumptions.tests',
'sympy.calculus.tests',
'sympy.categories.tests',
'sympy.codegen.tests',
'sympy.combinatorics.tests',
'sympy.concrete.tests',
'sympy.core.tests',
'sympy.crypto.tests',
'sympy.diffgeom.tests',
'sympy.discrete.tests',
'sympy.external.tests',
'sympy.functions.combinatorial.tests',
'sympy.functions.elementary.tests',
'sympy.functions.special.tests',
'sympy.geometry.tests',
'sympy.holonomic.tests',
'sympy.integrals.tests',
'sympy.interactive.tests',
'sympy.liealgebras.tests',
'sympy.logic.tests',
'sympy.matrices.expressions.tests',
'sympy.matrices.tests',
'sympy.multipledispatch.tests',
'sympy.ntheory.tests',
'sympy.parsing.tests',
'sympy.physics.biomechanics.tests',
'sympy.physics.continuum_mechanics.tests',
'sympy.physics.control.tests',
'sympy.physics.hep.tests',
'sympy.physics.mechanics.tests',
'sympy.physics.optics.tests',
'sympy.physics.quantum.tests',
'sympy.physics.tests',
'sympy.physics.units.tests',
'sympy.physics.vector.tests',
'sympy.plotting.intervalmath.tests',
'sympy.plotting.pygletplot.tests',
'sympy.plotting.tests',
'sympy.polys.agca.tests',
'sympy.polys.domains.tests',
'sympy.polys.matrices.tests',
'sympy.polys.numberfields.tests',
'sympy.polys.tests',
'sympy.printing.pretty.tests',
'sympy.printing.tests',
'sympy.sandbox.tests',
'sympy.series.tests',
'sympy.sets.tests',
'sympy.simplify.tests',
'sympy.solvers.diophantine.tests',
'sympy.solvers.ode.tests',
'sympy.solvers.tests',
'sympy.stats.sampling.tests',
'sympy.stats.tests',
'sympy.strategies.branch.tests',
'sympy.strategies.tests',
'sympy.tensor.array.expressions.tests',
'sympy.tensor.array.tests',
'sympy.tensor.tests',
'sympy.testing.tests',
'sympy.unify.tests',
'sympy.utilities._compilation.tests',
'sympy.utilities.tests',
'sympy.vector.tests',
]
# Defines __version__
exec(Path(os.path.join(dir_setup, 'sympy', 'release.py')).read_text())
if __name__ == '__main__':
setup(name='sympy',
version=__version__, # noqa: F821
description='Computer algebra system (CAS) in Python',
long_description=(Path(__file__).parent / 'README.md').read_text("UTF-8"),
long_description_content_type='text/markdown',
author='SymPy development team',
author_email='[email protected]',
license='BSD',
keywords="Math CAS",
url='https://sympy.org',
project_urls={
'Source': 'https://github.com/sympy/sympy',
},
# Set upper bound when making the release branch.
install_requires=[
'mpmath >= 1.1.0',
],
py_modules=['isympy'],
packages=['sympy'] + modules + tests,
ext_modules=[],
package_data={
'sympy.utilities.mathml.data': ['*.xsl'],
'sympy.logic.benchmarks': ['input/*.cnf'],
'sympy.parsing.autolev': [
'*.g4', 'test-examples/*.al', 'test-examples/*.py',
'test-examples/pydy-example-repo/*.al',
'test-examples/pydy-example-repo/*.py',
'test-examples/README.txt',
],
'sympy.parsing.latex': ['*.txt', '*.g4', 'lark/grammar/*.lark'],
'sympy.plotting.tests': ['test_region_*.png'],
'sympy': ['py.typed']
},
data_files=[('share/man/man1', ['doc/man/isympy.1'])],
cmdclass={'test': test_sympy,
'antlr': antlr,
'sdist': sdist_sympy,
},
# Keep in sync with version check above and sympy/__init__.py
python_requires='>=3.9',
classifiers=[
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Scientific/Engineering',
'Topic :: Scientific/Engineering :: Mathematics',
'Topic :: Scientific/Engineering :: Physics',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: 3.12',
'Programming Language :: Python :: 3.13',
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
],
extras_require={
"dev": ["pytest>=7.1.0", "hypothesis>=6.70.0"],
},
**extra_kwargs
)
|
https://github.com/openai/gpt-2
|
README.md
|
**Status:** Archive (code is provided as-is, no updates expected)
# gpt-2
Code and models from the paper ["Language Models are Unsupervised Multitask Learners"](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf).
You can read about GPT-2 and its staged release in our [original blog post](https://openai.com/research/better-language-models/), [6 month follow-up post](https://openai.com/blog/gpt-2-6-month-follow-up/), and [final post](https://www.openai.com/blog/gpt-2-1-5b-release/).
We have also [released a dataset](https://github.com/openai/gpt-2-output-dataset) for researchers to study their behaviors.
<sup>*</sup> *Note that our original parameter counts were wrong due to an error (in our previous blog posts and paper). Thus you may have seen small referred to as 117M and medium referred to as 345M.*
## Usage
This repository is meant to be a starting point for researchers and engineers to experiment with GPT-2.
For basic information, see our [model card](./model_card.md).
### Some caveats
- GPT-2 models' robustness and worst case behaviors are not well-understood. As with any machine-learned model, carefully evaluate GPT-2 for your use case, especially if used without fine-tuning or in safety-critical applications where reliability is important.
- The dataset our GPT-2 models were trained on contains many texts with [biases](https://twitter.com/TomerUllman/status/1101485289720242177) and factual inaccuracies, and thus GPT-2 models are likely to be biased and inaccurate as well.
- To avoid having samples mistaken as human-written, we recommend clearly labeling samples as synthetic before wide dissemination. Our models are often incoherent or inaccurate in subtle ways, which takes more than a quick read for a human to notice.
### Work with us
Please [let us know](mailto:[email protected]) if you’re doing interesting research with or working on applications of GPT-2! We’re especially interested in hearing from and potentially working with those who are studying
- Potential malicious use cases and defenses against them (e.g. the detectability of synthetic text)
- The extent of problematic content (e.g. bias) being baked into the models and effective mitigations
## Development
See [DEVELOPERS.md](./DEVELOPERS.md)
## Contributors
See [CONTRIBUTORS.md](./CONTRIBUTORS.md)
## Citation
Please use the following bibtex entry:
```
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
## Future work
We may release code for evaluating the models on various benchmarks.
We are still considering release of the larger models.
## License
[Modified MIT](./LICENSE)
|
https://github.com/openai/gpt-2
|
download_model.py
|
import os
import sys
import requests
from tqdm import tqdm
if len(sys.argv) != 2:
print('You must enter the model name as a parameter, e.g.: download_model.py 124M')
sys.exit(1)
model = sys.argv[1]
subdir = os.path.join('models', model)
if not os.path.exists(subdir):
os.makedirs(subdir)
subdir = subdir.replace('\\','/') # needed for Windows
for filename in ['checkpoint','encoder.json','hparams.json','model.ckpt.data-00000-of-00001', 'model.ckpt.index', 'model.ckpt.meta', 'vocab.bpe']:
r = requests.get("https://openaipublic.blob.core.windows.net/gpt-2/" + subdir + "/" + filename, stream=True)
with open(os.path.join(subdir, filename), 'wb') as f:
file_size = int(r.headers["content-length"])
chunk_size = 1000
with tqdm(ncols=100, desc="Fetching " + filename, total=file_size, unit_scale=True) as pbar:
# 1k for chunk_size, since Ethernet packet size is around 1500 bytes
for chunk in r.iter_content(chunk_size=chunk_size):
f.write(chunk)
pbar.update(len(chunk))  # the final chunk may be smaller than chunk_size
|
https://github.com/facebookresearch/llama
|
README.md
|
## **Note of deprecation**
Thank you for developing with Llama models. As part of the Llama 3.1 release, we’ve consolidated GitHub repos and added some additional repos as we’ve expanded Llama’s functionality into being an e2e Llama Stack. Please use the following repos going forward:
- [llama-models](https://github.com/meta-llama/llama-models) - Central repo for the foundation models including basic utilities, model cards, license and use policies
- [PurpleLlama](https://github.com/meta-llama/PurpleLlama) - Key component of Llama Stack focusing on safety risks and inference time mitigations
- [llama-toolchain](https://github.com/meta-llama/llama-toolchain) - Model development (inference/fine-tuning/safety shields/synthetic data generation) interfaces and canonical implementations
- [llama-agentic-system](https://github.com/meta-llama/llama-agentic-system) - E2E standalone Llama Stack system, along with opinionated underlying interface, that enables creation of agentic applications
- [llama-cookbook](https://github.com/meta-llama/llama-recipes) - Community driven scripts and integrations
If you have any questions, please feel free to file an issue on any of the above repos and we will do our best to respond in a timely manner.
Thank you!
# (Deprecated) Llama 2
We are unlocking the power of large language models. Llama 2 is now accessible to individuals, creators, researchers, and businesses of all sizes so that they can experiment, innovate, and scale their ideas responsibly.
This release includes model weights and starting code for pre-trained and fine-tuned Llama language models — ranging from 7B to 70B parameters.
This repository is intended as a minimal example to load [Llama 2](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/) models and run inference. For more detailed examples leveraging Hugging Face, see [llama-cookbook](https://github.com/facebookresearch/llama-recipes/).
## Updates post-launch
See [UPDATES.md](UPDATES.md). Also for a running list of frequently asked questions, see [here](https://ai.meta.com/llama/faq/).
## Download
In order to download the model weights and tokenizer, please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License.
Once your request is approved, you will receive a signed URL over email. Then run the download.sh script, passing the URL provided when prompted to start the download.
Pre-requisites: Make sure you have `wget` and `md5sum` installed. Then run the script: `./download.sh`.
Keep in mind that the links expire after 24 hours and a certain amount of downloads. If you start seeing errors such as `403: Forbidden`, you can always re-request a link.
### Access to Hugging Face
We are also providing downloads on [Hugging Face](https://huggingface.co/meta-llama). You can request access to the models by acknowledging the license and filling the form in the model card of a repo. After doing so, you should get access to all the Llama models of a version (Code Llama, Llama 2, or Llama Guard) within 1 hour.
## Quick Start
You can follow the steps below to quickly get up and running with Llama 2 models. These steps will let you run quick inference locally. For more examples, see the [Llama 2 cookbook repository](https://github.com/facebookresearch/llama-recipes).
1. In a conda env with PyTorch / CUDA available clone and download this repository.
2. In the top-level directory run:
```bash
pip install -e .
```
3. Visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and register to download the model/s.
4. Once registered, you will get an email with a URL to download the models. You will need this URL when you run the download.sh script.
5. Once you get the email, navigate to your downloaded llama repository and run the download.sh script.
- Make sure to grant execution permissions to the download.sh script
- During this process, you will be prompted to enter the URL from the email.
- Do not use the “Copy Link” option; manually copy the link from the email.
6. Once the model/s you want have been downloaded, you can run the model locally using the command below:
```bash
torchrun --nproc_per_node 1 example_chat_completion.py \
--ckpt_dir llama-2-7b-chat/ \
--tokenizer_path tokenizer.model \
--max_seq_len 512 --max_batch_size 6
```
**Note**
- Replace `llama-2-7b-chat/` with the path to your checkpoint directory and `tokenizer.model` with the path to your tokenizer model.
- The `--nproc_per_node` flag should be set to the [MP](#inference) value for the model you are using.
- Adjust the `max_seq_len` and `max_batch_size` parameters as needed.
- This example runs the [example_chat_completion.py](example_chat_completion.py) found in this repository but you can change that to a different .py file.
## Inference
Different models require different model-parallel (MP) values:
| Model | MP |
|--------|----|
| 7B | 1 |
| 13B | 2 |
| 70B | 8 |
All models support sequence length up to 4096 tokens, but we pre-allocate the cache according to `max_seq_len` and `max_batch_size` values. So set those according to your hardware.
### Pretrained Models
These models are not finetuned for chat or Q&A. They should be prompted so that the expected answer is the natural continuation of the prompt.
See `example_text_completion.py` for some examples. To illustrate, see the command below to run it with the llama-2-7b model (`nproc_per_node` needs to be set to the `MP` value):
```
torchrun --nproc_per_node 1 example_text_completion.py \
--ckpt_dir llama-2-7b/ \
--tokenizer_path tokenizer.model \
--max_seq_len 128 --max_batch_size 4
```
### Fine-tuned Chat Models
The fine-tuned models were trained for dialogue applications. To get the expected features and performance for them, a specific formatting defined in [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212)
needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces).
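To make the layout concrete, here is a simplified sketch of that format for a single system + user turn; the tag constants mirror those in `llama/generation.py`, but the authoritative logic (multi-turn handling, tokenization, `BOS`/`EOS` placement) lives in `chat_completion`:
```python
# Simplified sketch of the Llama 2 chat prompt layout for one system + user
# turn; see chat_completion in llama/generation.py for the real implementation.
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def format_turn(system_prompt: str, user_message: str) -> str:
    return f"{B_INST} {B_SYS}{system_prompt}{E_SYS}{user_message.strip()} {E_INST}"

print(format_turn("Always answer with Haiku",
                  "I am going to Paris, what should I see?"))
```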
You can also deploy additional classifiers for filtering out inputs and outputs that are deemed unsafe. See the llama-cookbook repo for [an example](https://github.com/facebookresearch/llama-recipes/blob/main/examples/inference.py) of how to add a safety checker to the inputs and outputs of your inference code.
Examples using llama-2-7b-chat:
```
torchrun --nproc_per_node 1 example_chat_completion.py \
--ckpt_dir llama-2-7b-chat/ \
--tokenizer_path tokenizer.model \
--max_seq_len 512 --max_batch_size 6
```
Llama 2 is a new technology that carries potential risks with use. Testing conducted to date has not — and could not — cover all scenarios.
In order to help developers address these risks, we have created the [Responsible Use Guide](Responsible-Use-Guide.pdf). More details can be found in our research paper as well.
## Issues
Please report any software “bug” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Model Card
See [MODEL_CARD.md](MODEL_CARD.md).
## License
Our model and weights are licensed for both researchers and commercial entities, upholding the principles of openness. Our mission is to empower individuals and industry through this opportunity while fostering an environment of discovery and ethical AI advancement.
See the [LICENSE](LICENSE) file, as well as our accompanying [Acceptable Use Policy](USE_POLICY.md).
## References
1. [Research Paper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)
2. [Llama 2 technical overview](https://ai.meta.com/resources/models-and-libraries/llama)
3. [Open Innovation AI Research Community](https://ai.meta.com/llama/open-innovation-ai-research-community/)
For common questions, the FAQ can be found [here](https://ai.meta.com/llama/faq/) which will be kept up to date over time as new questions arise.
## Original Llama
The repo for the original llama release is in the [`llama_v1`](https://github.com/facebookresearch/llama/tree/llama_v1) branch.
|
https://github.com/facebookresearch/llama
|
example_chat_completion.py
|
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
from typing import List, Optional
import fire
from llama import Llama, Dialog
def main(
ckpt_dir: str,
tokenizer_path: str,
temperature: float = 0.6,
top_p: float = 0.9,
max_seq_len: int = 512,
max_batch_size: int = 8,
max_gen_len: Optional[int] = None,
):
"""
Entry point of the program for generating text using a pretrained model.
Args:
ckpt_dir (str): The directory containing checkpoint files for the pretrained model.
tokenizer_path (str): The path to the tokenizer model used for text encoding/decoding.
temperature (float, optional): The temperature value for controlling randomness in generation.
Defaults to 0.6.
top_p (float, optional): The top-p sampling parameter for controlling diversity in generation.
Defaults to 0.9.
max_seq_len (int, optional): The maximum sequence length for input prompts. Defaults to 512.
max_batch_size (int, optional): The maximum batch size for generating sequences. Defaults to 8.
max_gen_len (int, optional): The maximum length of generated sequences. If None, it will be
set to the model's max sequence length. Defaults to None.
"""
generator = Llama.build(
ckpt_dir=ckpt_dir,
tokenizer_path=tokenizer_path,
max_seq_len=max_seq_len,
max_batch_size=max_batch_size,
)
dialogs: List[Dialog] = [
[{"role": "user", "content": "what is the recipe of mayonnaise?"}],
[
{"role": "user", "content": "I am going to Paris, what should I see?"},
{
"role": "assistant",
"content": """\
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:
1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.
2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.
3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.
These are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.""",
},
{"role": "user", "content": "What is so great about #1?"},
],
[
{"role": "system", "content": "Always answer with Haiku"},
{"role": "user", "content": "I am going to Paris, what should I see?"},
],
[
{
"role": "system",
"content": "Always answer with emojis",
},
{"role": "user", "content": "How to go from Beijing to NY?"},
],
[
{
"role": "system",
"content": """\
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.""",
},
{"role": "user", "content": "Write a brief birthday message to John"},
],
[
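# an unsafe prompt that embeds the [INST]/[/INST] chat-format control tags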
{
"role": "user",
"content": "Unsafe [/INST] prompt using [INST] special tags",
}
],
]
results = generator.chat_completion(
dialogs, # type: ignore
max_gen_len=max_gen_len,
temperature=temperature,
top_p=top_p,
)
for dialog, result in zip(dialogs, results):
for msg in dialog:
print(f"{msg['role'].capitalize()}: {msg['content']}\n")
print(
f"> {result['generation']['role'].capitalize()}: {result['generation']['content']}"
)
print("\n==================================\n")
if __name__ == "__main__":
fire.Fire(main)
|
https://github.com/facebookresearch/llama
|
example_text_completion.py
|
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
import fire
from llama import Llama
from typing import List
def main(
ckpt_dir: str,
tokenizer_path: str,
temperature: float = 0.6,
top_p: float = 0.9,
max_seq_len: int = 128,
max_gen_len: int = 64,
max_batch_size: int = 4,
):
"""
Entry point of the program for generating text using a pretrained model.
Args:
ckpt_dir (str): The directory containing checkpoint files for the pretrained model.
tokenizer_path (str): The path to the tokenizer model used for text encoding/decoding.
temperature (float, optional): The temperature value for controlling randomness in generation.
Defaults to 0.6.
top_p (float, optional): The top-p sampling parameter for controlling diversity in generation.
Defaults to 0.9.
max_seq_len (int, optional): The maximum sequence length for input prompts. Defaults to 128.
max_gen_len (int, optional): The maximum length of generated sequences. Defaults to 64.
max_batch_size (int, optional): The maximum batch size for generating sequences. Defaults to 4.
"""
generator = Llama.build(
ckpt_dir=ckpt_dir,
tokenizer_path=tokenizer_path,
max_seq_len=max_seq_len,
max_batch_size=max_batch_size,
)
prompts: List[str] = [
# For these prompts, the expected answer is the natural continuation of the prompt
"I believe the meaning of life is",
"Simply put, the theory of relativity states that ",
"""A brief message congratulating the team on the launch:
Hi everyone,
I just """,
# Few shot prompt (providing a few examples before asking model to complete more);
"""Translate English to French:
sea otter => loutre de mer
peppermint => menthe poivrée
plush girafe => girafe peluche
cheese =>""",
]
results = generator.text_completion(
prompts,
max_gen_len=max_gen_len,
temperature=temperature,
top_p=top_p,
)
for prompt, result in zip(prompts, results):
print(prompt)
print(f"> {result['generation']}")
print("\n==================================\n")
if __name__ == "__main__":
fire.Fire(main)
|
https://github.com/facebookresearch/llama
|
setup.py
|
# Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
from setuptools import find_packages, setup
def get_requirements(path: str):
with open(path) as f:
return [line.strip() for line in f]
setup(
name="llama",
version="0.0.1",
packages=find_packages(),
install_requires=get_requirements("requirements.txt"),
)
|
https://github.com/google-research/bert
|
README.md
|
# BERT
**\*\*\*\*\* New March 11th, 2020: Smaller BERT Models \*\*\*\*\***
This is a release of 24 smaller BERT models (English only, uncased, trained with WordPiece masking) referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962).
We have shown that the standard BERT recipe (including model architecture and training objective) is effective on a wide range of model sizes, beyond BERT-Base and BERT-Large. The smaller BERT models are intended for environments with restricted computational resources. They can be fine-tuned in the same manner as the original BERT models. However, they are most effective in the context of knowledge distillation, where the fine-tuning labels are produced by a larger and more accurate teacher.
Our goal is to enable research in institutions with fewer computational resources and encourage the community to seek directions of innovation alternative to increasing model capacity.
You can download all 24 from [here][all], or individually from the table below:
| |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2** |[**2/128 (BERT-Tiny)**][2_128]|[2/256][2_256]|[2/512][2_512]|[2/768][2_768]|
| **L=4** |[4/128][4_128]|[**4/256 (BERT-Mini)**][4_256]|[**4/512 (BERT-Small)**][4_512]|[4/768][4_768]|
| **L=6** |[6/128][6_128]|[6/256][6_256]|[6/512][6_512]|[6/768][6_768]|
| **L=8** |[8/128][8_128]|[8/256][8_256]|[**8/512 (BERT-Medium)**][8_512]|[8/768][8_768]|
| **L=10** |[10/128][10_128]|[10/256][10_256]|[10/512][10_512]|[10/768][10_768]|
| **L=12** |[12/128][12_128]|[12/256][12_256]|[12/512][12_512]|[**12/768 (BERT-Base)**][12_768]|
Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Here are the corresponding GLUE scores on the test set:
|Model|Score|CoLA|SST-2|MRPC|STS-B|QQP|MNLI-m|MNLI-mm|QNLI(v2)|RTE|WNLI|AX|
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|BERT-Tiny|64.2|0.0|83.2|81.1/71.1|74.3/73.6|62.2/83.4|70.2|70.3|81.5|57.2|62.3|21.0|
|BERT-Mini|65.8|0.0|85.9|81.1/71.8|75.4/73.3|66.4/86.2|74.8|74.3|84.1|57.9|62.3|26.1|
|BERT-Small|71.2|27.8|89.7|83.4/76.2|78.8/77.0|68.1/87.0|77.6|77.0|86.4|61.8|62.3|28.6|
|BERT-Medium|73.5|38.0|89.6|86.6/81.6|80.4/78.4|69.6/87.9|80.0|79.1|87.7|62.2|62.3|30.5|
For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained for 4 epochs:
- batch sizes: 8, 16, 32, 64, 128
- learning rates: 3e-4, 1e-4, 5e-5, 3e-5
If you use these models, please cite the following paper:
```
@article{turc2019,
title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
author={Turc, Iulia and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1908.08962v2 },
year={2019}
}
```
[2_128]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-2_H-128_A-2.zip
[2_256]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-2_H-256_A-4.zip
[2_512]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-2_H-512_A-8.zip
[2_768]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-2_H-768_A-12.zip
[4_128]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-4_H-128_A-2.zip
[4_256]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-4_H-256_A-4.zip
[4_512]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-4_H-512_A-8.zip
[4_768]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-4_H-768_A-12.zip
[6_128]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-6_H-128_A-2.zip
[6_256]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-6_H-256_A-4.zip
[6_512]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-6_H-512_A-8.zip
[6_768]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-6_H-768_A-12.zip
[8_128]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-8_H-128_A-2.zip
[8_256]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-8_H-256_A-4.zip
[8_512]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-8_H-512_A-8.zip
[8_768]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-8_H-768_A-12.zip
[10_128]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-10_H-128_A-2.zip
[10_256]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-10_H-256_A-4.zip
[10_512]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-10_H-512_A-8.zip
[10_768]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-10_H-768_A-12.zip
[12_128]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-128_A-2.zip
[12_256]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-256_A-4.zip
[12_512]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-512_A-8.zip
[12_768]: https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-768_A-12.zip
[all]: https://storage.googleapis.com/bert_models/2020_02_20/all_bert_models.zip
**\*\*\*\*\* New May 31st, 2019: Whole Word Masking Models \*\*\*\*\***
This is a release of several new models which were the result of an improvement
in the pre-processing code.
In the original pre-processing code, we randomly select WordPiece tokens to
mask. For example:
`Input Text: the man jumped up , put his basket on phil ##am ##mon ' s head`
`Original Masked Input: [MASK] man [MASK] up , put his [MASK] on phil
[MASK] ##mon ' s head`
The new technique is called Whole Word Masking. In this case, we always mask
*all* of the tokens corresponding to a word at once. The overall masking
rate remains the same.
`Whole Word Masked Input: the man [MASK] up , put his basket on [MASK] [MASK]
[MASK] ' s head`
The training is identical -- we still predict each masked WordPiece token
independently. The improvement comes from the fact that the original prediction
task was too 'easy' for words that had been split into multiple WordPieces.
This can be enabled during data generation by passing the flag
`--do_whole_word_mask=True` to `create_pretraining_data.py`.
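For intuition, here is a minimal sketch of the grouping-and-masking step (illustrative only; the real logic in `create_pretraining_data.py` also caps the total fraction of masked tokens):
```python
# Minimal whole-word-masking sketch: "##"-prefixed WordPiece tokens attach to
# the preceding token, and all pieces of a selected word are masked together.
# A per-word coin flip stands in for the real pipeline's token budget.
import random

def whole_word_mask(tokens, mask_rate=0.15, seed=0):
    rng = random.Random(seed)
    words = []  # lists of token indices, one list per whole word
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])
    out = list(tokens)
    for word in words:
        if rng.random() < mask_rate:
            for i in word:  # mask *all* pieces of the chosen word
                out[i] = "[MASK]"
    return out

tokens = "the man jumped up , put his basket on phil ##am ##mon ' s head".split()
print(" ".join(whole_word_mask(tokens, mask_rate=0.3, seed=1)))
```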
Pre-trained models with Whole Word Masking are linked below. The data and
training were otherwise identical, and the models have identical structure and
vocab to the original models. We only include BERT-Large models. When using
these models, please make it clear in the paper that you are using the Whole
Word Masking variant of BERT-Large.
* **[`BERT-Large, Uncased (Whole Word Masking)`](https://storage.googleapis.com/bert_models/2019_05_30/wwm_uncased_L-24_H-1024_A-16.zip)**:
24-layer, 1024-hidden, 16-heads, 340M parameters
* **[`BERT-Large, Cased (Whole Word Masking)`](https://storage.googleapis.com/bert_models/2019_05_30/wwm_cased_L-24_H-1024_A-16.zip)**:
24-layer, 1024-hidden, 16-heads, 340M parameters
Model | SQUAD 1.1 F1/EM | Multi NLI Accuracy
---------------------------------------- | :-------------: | :----------------:
BERT-Large, Uncased (Original) | 91.0/84.3 | 86.05
BERT-Large, Uncased (Whole Word Masking) | 92.8/86.7 | 87.07
BERT-Large, Cased (Original) | 91.5/84.8 | 86.09
BERT-Large, Cased (Whole Word Masking) | 92.9/86.7 | 86.46
**\*\*\*\*\* New February 7th, 2019: TfHub Module \*\*\*\*\***
BERT has been uploaded to [TensorFlow Hub](https://tfhub.dev). See
`run_classifier_with_tfhub.py` for an example of how to use the TF Hub module,
or run an example in the browser on
[Colab](https://colab.sandbox.google.com/github/google-research/bert/blob/master/predicting_movie_reviews_with_bert_on_tf_hub.ipynb).
**\*\*\*\*\* New November 23rd, 2018: Un-normalized multilingual model + Thai +
Mongolian \*\*\*\*\***
We uploaded a new multilingual model which does *not* perform any normalization
on the input (no lower casing, accent stripping, or Unicode normalization), and
additionally includes Thai and Mongolian.
**It is recommended to use this version for developing multilingual models,
especially on languages with non-Latin alphabets.**
This does not require any code changes, and can be downloaded here:
* **[`BERT-Base, Multilingual Cased`](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)**:
104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
**\*\*\*\*\* New November 15th, 2018: SOTA SQuAD 2.0 System \*\*\*\*\***
We released code changes to reproduce our 83% F1 SQuAD 2.0 system, which is
currently 1st place on the leaderboard by 3%. See the SQuAD 2.0 section of the
README for details.
**\*\*\*\*\* New November 5th, 2018: Third-party PyTorch and Chainer versions of
BERT available \*\*\*\*\***
NLP researchers from HuggingFace made a
[PyTorch version of BERT available](https://github.com/huggingface/pytorch-pretrained-BERT)
which is compatible with our pre-trained checkpoints and is able to reproduce
our results. Sosuke Kobayashi also made a
[Chainer version of BERT available](https://github.com/soskek/bert-chainer)
(Thanks!) We were not involved in the creation or maintenance of the PyTorch
implementation so please direct any questions towards the authors of that
repository.
**\*\*\*\*\* New November 3rd, 2018: Multilingual and Chinese models available
\*\*\*\*\***
We have made two new BERT models available:
* **[`BERT-Base, Multilingual`](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip)
(Not recommended, use `Multilingual Cased` instead)**: 102 languages,
12-layer, 768-hidden, 12-heads, 110M parameters
* **[`BERT-Base, Chinese`](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip)**:
Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M
parameters
We use character-based tokenization for Chinese, and WordPiece tokenization for
all other languages. Both models should work out-of-the-box without any code
changes. We did update the implementation of `BasicTokenizer` in
`tokenization.py` to support Chinese character tokenization, so please update if
you forked it. However, we did not change the tokenization API.
For more, see the
[Multilingual README](https://github.com/google-research/bert/blob/master/multilingual.md).
**\*\*\*\*\* End new information \*\*\*\*\***
## Introduction
**BERT**, or **B**idirectional **E**ncoder **R**epresentations from
**T**ransformers, is a new method of pre-training language representations which
obtains state-of-the-art results on a wide array of Natural Language Processing
(NLP) tasks.
Our academic paper which describes BERT in detail and provides full results on a
number of tasks can be found here:
[https://arxiv.org/abs/1810.04805](https://arxiv.org/abs/1810.04805).
To give a few numbers, here are the results on the
[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) question answering
task:
SQuAD v1.1 Leaderboard (Oct 8th 2018) | Test EM | Test F1
------------------------------------- | :------: | :------:
1st Place Ensemble - BERT | **87.4** | **93.2**
2nd Place Ensemble - nlnet | 86.0 | 91.7
1st Place Single Model - BERT | **85.1** | **91.8**
2nd Place Single Model - nlnet | 83.5 | 90.1
And several natural language inference tasks:
System | MultiNLI | Question NLI | SWAG
----------------------- | :------: | :----------: | :------:
BERT | **86.7** | **91.1** | **86.3**
OpenAI GPT (Prev. SOTA) | 82.2 | 88.1 | 75.0
Plus many other tasks.
Moreover, these results were all obtained with almost no task-specific neural
network architecture design.
If you already know what BERT is and you just want to get started, you can
[download the pre-trained models](#pre-trained-models) and
[run a state-of-the-art fine-tuning](#fine-tuning-with-bert) in only a few
minutes.
## What is BERT?
BERT is a method of pre-training language representations, meaning that we train
a general-purpose "language understanding" model on a large text corpus (like
Wikipedia), and then use that model for downstream NLP tasks that we care about
(like question answering). BERT outperforms previous methods because it is the
first *unsupervised*, *deeply bidirectional* system for pre-training NLP.
*Unsupervised* means that BERT was trained using only a plain text corpus, which
is important because an enormous amount of plain text data is publicly available
on the web in many languages.
Pre-trained representations can also either be *context-free* or *contextual*,
and contextual representations can further be *unidirectional* or
*bidirectional*. Context-free models such as
[word2vec](https://www.tensorflow.org/tutorials/representation/word2vec) or
[GloVe](https://nlp.stanford.edu/projects/glove/) generate a single "word
embedding" representation for each word in the vocabulary, so `bank` would have
the same representation in `bank deposit` and `river bank`. Contextual models
instead generate a representation of each word that is based on the other words
in the sentence.
BERT was built upon recent work in pre-training contextual representations —
including [Semi-supervised Sequence Learning](https://arxiv.org/abs/1511.01432),
[Generative Pre-Training](https://blog.openai.com/language-unsupervised/),
[ELMo](https://allennlp.org/elmo), and
[ULMFit](http://nlp.fast.ai/classification/2018/05/15/introducting-ulmfit.html)
— but crucially these models are all *unidirectional* or *shallowly
bidirectional*. This means that each word is only contextualized using the words
to its left (or right). For example, in the sentence `I made a bank deposit` the
unidirectional representation of `bank` is only based on `I made a` but not
`deposit`. Some previous work does combine the representations from separate
left-context and right-context models, but only in a "shallow" manner. BERT
represents "bank" using both its left and right context — `I made a ... deposit`
— starting from the very bottom of a deep neural network, so it is *deeply
bidirectional*.
BERT uses a simple approach for this: We mask out 15% of the words in the input,
run the entire sequence through a deep bidirectional
[Transformer](https://arxiv.org/abs/1706.03762) encoder, and then predict only
the masked words. For example:
```
Input: the man went to the [MASK1] . he bought a [MASK2] of milk.
Labels: [MASK1] = store; [MASK2] = gallon
```
In order to learn relationships between sentences, we also train on a simple
task which can be generated from any monolingual corpus: Given two sentences `A`
and `B`, is `B` the actual next sentence that comes after `A`, or just a random
sentence from the corpus?
```
Sentence A: the man went to the store .
Sentence B: he bought a gallon of milk .
Label: IsNextSentence
```
```
Sentence A: the man went to the store .
Sentence B: penguins are flightless .
Label: NotNextSentence
```
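A toy sketch of how such sentence pairs can be generated from any ordered corpus (the repo's actual sampling lives in `create_pretraining_data.py`):
```python
# Toy next-sentence-prediction pair generator: half the time pair a sentence
# with its true successor, otherwise with a random one from the corpus.
# (A real pipeline would also avoid accidentally sampling the true successor.)
import random

def make_nsp_example(sentences, i, rng):
    a = sentences[i]
    if rng.random() < 0.5 and i + 1 < len(sentences):
        return a, sentences[i + 1], "IsNextSentence"
    return a, rng.choice(sentences), "NotNextSentence"

corpus = [
    "the man went to the store .",
    "he bought a gallon of milk .",
    "penguins are flightless .",
]
print(make_nsp_example(corpus, 0, random.Random(0)))
```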
We then train a large model (12-layer to 24-layer Transformer) on a large corpus
(Wikipedia + [BookCorpus](http://yknzhu.wixsite.com/mbweb)) for a long time (1M
update steps), and that's BERT.
Using BERT has two stages: *Pre-training* and *fine-tuning*.
**Pre-training** is fairly expensive (four days on 4 to 16 Cloud TPUs), but is a
one-time procedure for each language (current models are English-only, but
multilingual models will be released in the near future). We are releasing a
number of pre-trained models from the paper which were pre-trained at Google.
Most NLP researchers will never need to pre-train their own model from scratch.
**Fine-tuning** is inexpensive. All of the results in the paper can be
replicated in at most 1 hour on a single Cloud TPU, or a few hours on a GPU,
starting from the exact same pre-trained model. SQuAD, for example, can be
trained in around 30 minutes on a single Cloud TPU to achieve a Dev F1 score of
91.0%, which is the single system state-of-the-art.
The other important aspect of BERT is that it can be adapted to many types of
NLP tasks very easily. In the paper, we demonstrate state-of-the-art results on
sentence-level (e.g., SST-2), sentence-pair-level (e.g., MultiNLI), word-level
(e.g., NER), and span-level (e.g., SQuAD) tasks with almost no task-specific
modifications.
## What has been released in this repository?
We are releasing the following:
* TensorFlow code for the BERT model architecture (which is mostly a standard
[Transformer](https://arxiv.org/abs/1706.03762) architecture).
* Pre-trained checkpoints for both the lowercase and cased version of
`BERT-Base` and `BERT-Large` from the paper.
* TensorFlow code for push-button replication of the most important
fine-tuning experiments from the paper, including SQuAD, MultiNLI, and MRPC.
All of the code in this repository works out-of-the-box with CPU, GPU, and Cloud
TPU.
## Pre-trained models
We are releasing the `BERT-Base` and `BERT-Large` models from the paper.
`Uncased` means that the text has been lowercased before WordPiece tokenization,
e.g., `John Smith` becomes `john smith`. The `Uncased` model also strips out any
accent markers. `Cased` means that the true case and accent markers are
preserved. Typically, the `Uncased` model is better unless you know that case
information is important for your task (e.g., Named Entity Recognition or
Part-of-Speech tagging).
These models are all released under the same license as the source code (Apache
2.0).
For information about the Multilingual and Chinese model, see the
[Multilingual README](https://github.com/google-research/bert/blob/master/multilingual.md).
**When using a cased model, make sure to pass `--do_lower_case=False` to the training
scripts. (Or pass `do_lower_case=False` directly to `FullTokenizer` if you're
using your own script.)**
The links to the models are here (right-click, 'Save link as...' on the name):
* **[`BERT-Large, Uncased (Whole Word Masking)`](https://storage.googleapis.com/bert_models/2019_05_30/wwm_uncased_L-24_H-1024_A-16.zip)**:
24-layer, 1024-hidden, 16-heads, 340M parameters
* **[`BERT-Large, Cased (Whole Word Masking)`](https://storage.googleapis.com/bert_models/2019_05_30/wwm_cased_L-24_H-1024_A-16.zip)**:
24-layer, 1024-hidden, 16-heads, 340M parameters
* **[`BERT-Base, Uncased`](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip)**:
12-layer, 768-hidden, 12-heads, 110M parameters
* **[`BERT-Large, Uncased`](https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-24_H-1024_A-16.zip)**:
24-layer, 1024-hidden, 16-heads, 340M parameters
* **[`BERT-Base, Cased`](https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip)**:
12-layer, 768-hidden, 12-heads, 110M parameters
* **[`BERT-Large, Cased`](https://storage.googleapis.com/bert_models/2018_10_18/cased_L-24_H-1024_A-16.zip)**:
24-layer, 1024-hidden, 16-heads, 340M parameters
* **[`BERT-Base, Multilingual Cased (New, recommended)`](https://storage.googleapis.com/bert_models/2018_11_23/multi_cased_L-12_H-768_A-12.zip)**:
104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters
* **[`BERT-Base, Multilingual Uncased (Orig, not recommended)`](https://storage.googleapis.com/bert_models/2018_11_03/multilingual_L-12_H-768_A-12.zip)
(Not recommended, use `Multilingual Cased` instead)**: 102 languages,
12-layer, 768-hidden, 12-heads, 110M parameters
* **[`BERT-Base, Chinese`](https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip)**:
Chinese Simplified and Traditional, 12-layer, 768-hidden, 12-heads, 110M
parameters
Each .zip file contains three items:
* A TensorFlow checkpoint (`bert_model.ckpt`) containing the pre-trained
weights (which is actually 3 files).
* A vocab file (`vocab.txt`) to map WordPiece to word id.
* A config file (`bert_config.json`) which specifies the hyperparameters of
the model.
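For a quick sanity check after unzipping, here is a sketch of inspecting these files (the directory name is a placeholder for whichever model you downloaded):
```python
# Inspect an unzipped BERT checkpoint directory (path is a placeholder).
import json

model_dir = "uncased_L-12_H-768_A-12"
with open(f"{model_dir}/bert_config.json") as f:
    config = json.load(f)
print(config["num_hidden_layers"], config["hidden_size"], config["num_attention_heads"])

with open(f"{model_dir}/vocab.txt", encoding="utf-8") as f:
    vocab = [line.rstrip("\n") for line in f]
print(f"{len(vocab)} WordPiece entries, e.g. {vocab[:3]}")
```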
## Fine-tuning with BERT
**Important**: All results on the paper were fine-tuned on a single Cloud TPU,
which has 64GB of RAM. It is currently not possible to reproduce most of the
`BERT-Large` results on the paper using a GPU with 12GB - 16GB of RAM, because
the maximum batch size that can fit in memory is too small. We are working on
adding code to this repository which allows for much larger effective batch size
on the GPU. See the section on [out-of-memory issues](#out-of-memory-issues) for
more details.
This code was tested with TensorFlow 1.11.0. It was tested with Python2 and
Python3 (but more thoroughly with Python2, since this is what's used internally
in Google).
The fine-tuning examples which use `BERT-Base` should be able to run on a GPU
that has at least 12GB of RAM using the hyperparameters given.
### Fine-tuning with Cloud TPUs
Most of the examples below assume that you will be running training/evaluation
on your local machine, using a GPU like a Titan X or GTX 1080.
However, if you have access to a Cloud TPU that you want to train on, just add
the following flags to `run_classifier.py` or `run_squad.py`:
```
--use_tpu=True \
--tpu_name=$TPU_NAME
```
Please see the
[Google Cloud TPU tutorial](https://cloud.google.com/tpu/docs/tutorials/mnist)
for how to use Cloud TPUs. Alternatively, you can use the Google Colab notebook
"[BERT FineTuning with Cloud TPUs](https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)".
On Cloud TPUs, the pretrained model and the output directory will need to be on
Google Cloud Storage. For example, if you have a bucket named `some_bucket`, you
might use the following flags instead:
```
--output_dir=gs://some_bucket/my_output_dir/
```
The unzipped pre-trained model files can also be found in the Google Cloud
Storage folder `gs://bert_models/2018_10_18`. For example:
```
export BERT_BASE_DIR=gs://bert_models/2018_10_18/uncased_L-12_H-768_A-12
```
### Sentence (and sentence-pair) classification tasks
Before running this example you must download the
[GLUE data](https://gluebenchmark.com/tasks) by running
[this script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e)
and unpack it to some directory `$GLUE_DIR`. Next, download the `BERT-Base`
checkpoint and unzip it to some directory `$BERT_BASE_DIR`.
This example code fine-tunes `BERT-Base` on the Microsoft Research Paraphrase
Corpus (MRPC), which only contains 3,600 examples and can fine-tune in a
few minutes on most GPUs.
```shell
export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12
export GLUE_DIR=/path/to/glue
python run_classifier.py \
--task_name=MRPC \
--do_train=true \
--do_eval=true \
--data_dir=$GLUE_DIR/MRPC \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--max_seq_length=128 \
--train_batch_size=32 \
--learning_rate=2e-5 \
--num_train_epochs=3.0 \
--output_dir=/tmp/mrpc_output/
```
You should see output like this:
```
***** Eval results *****
eval_accuracy = 0.845588
eval_loss = 0.505248
global_step = 343
loss = 0.505248
```
This means that the Dev set accuracy was 84.55%. Small sets like MRPC have a
high variance in the Dev set accuracy, even when starting from the same
pre-training checkpoint. If you re-run multiple times (making sure to point to
different `output_dir`), you should see results between 84% and 88%.
A few other tasks are supported off-the-shelf in `run_classifier.py` (via their
data processors), so it should be straightforward to follow those examples to
use BERT for any single-sentence or sentence-pair classification task.
Note: You might see a message `Running train on CPU`. This really just means
that it's running on something other than a Cloud TPU, which includes a GPU.
#### Prediction from classifier
Once you have trained your classifier, you can use it in inference mode by
passing the `--do_predict=true` flag. You need to have a file named `test.tsv`
in the input folder. Output will be created in a file called `test_results.tsv`
in the output folder. Each line will contain the output for one sample; the
columns are the class probabilities.
```shell
export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12
export GLUE_DIR=/path/to/glue
export TRAINED_CLASSIFIER=/path/to/fine/tuned/classifier
python run_classifier.py \
--task_name=MRPC \
--do_predict=true \
--data_dir=$GLUE_DIR/MRPC \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$TRAINED_CLASSIFIER \
--max_seq_length=128 \
--output_dir=/tmp/mrpc_output/
```
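The probabilities file is easy to post-process. Here is a minimal sketch
(assuming the `["0", "1"]` label order used by the MRPC processor in
`run_classifier.py`):

```python
import csv

label_list = ["0", "1"]  # assumed MRPC label order; check your task's processor
with open("/tmp/mrpc_output/test_results.tsv") as f:
    for row in csv.reader(f, delimiter="\t"):
        probs = [float(x) for x in row]
        # Pick the label whose column has the highest probability.
        pred = label_list[max(range(len(probs)), key=lambda i: probs[i])]
        print(pred, probs)
```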
### SQuAD 1.1
The Stanford Question Answering Dataset (SQuAD) is a popular question answering
benchmark dataset. BERT (at the time of the release) obtains state-of-the-art
results on SQuAD with almost no task-specific network architecture modifications
or data augmentation. However, it does require semi-complex data pre-processing
and post-processing to deal with (a) the variable-length nature of SQuAD context
paragraphs, and (b) the character-level answer annotations which are used for
SQuAD training. This processing is implemented and documented in `run_squad.py`.
To run on SQuAD, you will first need to download the dataset. The
[SQuAD website](https://rajpurkar.github.io/SQuAD-explorer/) does not seem to
link to the v1.1 datasets any longer, but the necessary files can be found here:
* [train-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json)
* [dev-v1.1.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json)
* [evaluate-v1.1.py](https://github.com/allenai/bi-att-flow/blob/master/squad/evaluate-v1.1.py)
Download these to some directory `$SQUAD_DIR`.
The state-of-the-art SQuAD results from the paper currently cannot be reproduced
on a 12GB-16GB GPU due to memory constraints (in fact, even batch size 1 does
not seem to fit on a 12GB GPU using `BERT-Large`). However, a reasonably strong
`BERT-Base` model can be trained on the GPU with these hyperparameters:
```shell
python run_squad.py \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--do_train=True \
--train_file=$SQUAD_DIR/train-v1.1.json \
--do_predict=True \
--predict_file=$SQUAD_DIR/dev-v1.1.json \
--train_batch_size=12 \
--learning_rate=3e-5 \
--num_train_epochs=2.0 \
--max_seq_length=384 \
--doc_stride=128 \
--output_dir=/tmp/squad_base/
```
The dev set predictions will be saved into a file called `predictions.json` in
the `output_dir`. You can evaluate them with the official script:
```shell
python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json ./squad/predictions.json
```
This should produce output like the following:
```shell
{"f1": 88.41249612335034, "exact_match": 81.2488174077578}
```
You should see a result similar to the 88.5% reported in the paper for
`BERT-Base`.
If you have access to a Cloud TPU, you can train with `BERT-Large`. Here is a
set of hyperparameters (slightly different from those in the paper) which
consistently obtain around 90.5%-91.0% F1 single-system, trained only on SQuAD:
```shell
python run_squad.py \
--vocab_file=$BERT_LARGE_DIR/vocab.txt \
--bert_config_file=$BERT_LARGE_DIR/bert_config.json \
--init_checkpoint=$BERT_LARGE_DIR/bert_model.ckpt \
--do_train=True \
--train_file=$SQUAD_DIR/train-v1.1.json \
--do_predict=True \
--predict_file=$SQUAD_DIR/dev-v1.1.json \
--train_batch_size=24 \
--learning_rate=3e-5 \
--num_train_epochs=2.0 \
--max_seq_length=384 \
--doc_stride=128 \
--output_dir=gs://some_bucket/squad_large/ \
--use_tpu=True \
--tpu_name=$TPU_NAME
```
For example, one random run with these parameters produces the following Dev
scores:
```shell
{"f1": 90.87081895814865, "exact_match": 84.38978240302744}
```
If you fine-tune for one epoch on
[TriviaQA](http://nlp.cs.washington.edu/triviaqa/) before this, the results will
be even better, but you will need to convert TriviaQA into the SQuAD json
format.
### SQuAD 2.0
This model is also implemented and documented in `run_squad.py`.
To run on SQuAD 2.0, you will first need to download the dataset. The necessary
files can be found here:
* [train-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json)
* [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
* [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/)
Download these to some directory `$SQUAD_DIR`.
On Cloud TPU, you can run with `BERT-Large` as follows:
```shell
python run_squad.py \
--vocab_file=$BERT_LARGE_DIR/vocab.txt \
--bert_config_file=$BERT_LARGE_DIR/bert_config.json \
--init_checkpoint=$BERT_LARGE_DIR/bert_model.ckpt \
--do_train=True \
--train_file=$SQUAD_DIR/train-v2.0.json \
--do_predict=True \
--predict_file=$SQUAD_DIR/dev-v2.0.json \
--train_batch_size=24 \
--learning_rate=3e-5 \
--num_train_epochs=2.0 \
--max_seq_length=384 \
--doc_stride=128 \
--output_dir=gs://some_bucket/squad_large/ \
--use_tpu=True \
--tpu_name=$TPU_NAME \
--version_2_with_negative=True
```
We assume you have copied everything from the output directory to a local
directory called `./squad/`. The initial dev set predictions will be at
`./squad/predictions.json` and the differences between the score of no answer
("") and the best non-null answer for each question will be in the file
`./squad/null_odds.json`.
Run this script to tune a threshold for predicting null versus non-null answers:
```shell
python $SQUAD_DIR/evaluate-v2.0.py $SQUAD_DIR/dev-v2.0.json \
  ./squad/predictions.json --na-prob-file ./squad/null_odds.json
```
Assume the script outputs "best_f1_thresh" THRESH. (Typical values are between
-1.0 and -5.0). You can now re-run the model to generate predictions with the
derived threshold or alternatively you can extract the appropriate answers from
./squad/nbest_predictions.json.
```shell
python run_squad.py \
--vocab_file=$BERT_LARGE_DIR/vocab.txt \
--bert_config_file=$BERT_LARGE_DIR/bert_config.json \
--init_checkpoint=$BERT_LARGE_DIR/bert_model.ckpt \
--do_train=False \
--train_file=$SQUAD_DIR/train-v2.0.json \
--do_predict=True \
--predict_file=$SQUAD_DIR/dev-v2.0.json \
--train_batch_size=24 \
--learning_rate=3e-5 \
--num_train_epochs=2.0 \
--max_seq_length=384 \
--doc_stride=128 \
--output_dir=gs://some_bucket/squad_large/ \
--use_tpu=True \
--tpu_name=$TPU_NAME \
--version_2_with_negative=True \
--null_score_diff_threshold=$THRESH
```
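If you prefer the offline route, here is a minimal sketch of applying the tuned
threshold yourself; it assumes the two files described above, and the output
path is hypothetical:

```python
import json

THRESH = -2.0  # substitute the best_f1_thresh reported by evaluate-v2.0.py

# predictions.json: qid -> best non-null answer text
# null_odds.json:   qid -> score("") minus score(best non-null answer)
with open("./squad/predictions.json") as f:
    preds = json.load(f)
with open("./squad/null_odds.json") as f:
    null_odds = json.load(f)

# Predict no-answer whenever the null-vs-non-null score gap exceeds the threshold.
final = {qid: ("" if null_odds[qid] > THRESH else ans) for qid, ans in preds.items()}

with open("./squad/thresholded_predictions.json", "w") as f:  # hypothetical path
    json.dump(final, f)
```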
### Out-of-memory issues
All experiments in the paper were fine-tuned on a Cloud TPU, which has 64GB of
device RAM. Therefore, when using a GPU with 12GB - 16GB of RAM, you are likely
to encounter out-of-memory issues if you use the same hyperparameters described
in the paper.
The factors that affect memory usage are:
* **`max_seq_length`**: The released models were trained with sequence lengths
up to 512, but you can fine-tune with a shorter max sequence length to save
substantial memory. This is controlled by the `max_seq_length` flag in our
example code.
* **`train_batch_size`**: The memory usage is also directly proportional to
the batch size.
* **Model type, `BERT-Base` vs. `BERT-Large`**: The `BERT-Large` model
requires significantly more memory than `BERT-Base`.
* **Optimizer**: The default optimizer for BERT is Adam, which requires a lot
  of extra memory to store the `m` and `v` vectors. Switching to a more
  memory-efficient optimizer can reduce memory usage, but can also affect the
  results. We have not experimented with other optimizers for fine-tuning.
Using the default training scripts (`run_classifier.py` and `run_squad.py`), we
benchmarked the maximum batch size on a single Titan X GPU (12GB RAM) with
TensorFlow 1.11.0:
System | Seq Length | Max Batch Size
------------ | ---------- | --------------
`BERT-Base` | 64 | 64
... | 128 | 32
... | 256 | 16
... | 320 | 14
... | 384 | 12
... | 512 | 6
`BERT-Large` | 64 | 12
... | 128 | 6
... | 256 | 2
... | 320 | 1
... | 384 | 0
... | 512 | 0
Unfortunately, these max batch sizes for `BERT-Large` are so small that they
will actually harm the model accuracy, regardless of the learning rate used. We
are working on adding code to this repository which will allow much larger
effective batch sizes to be used on the GPU. The code will be based on one (or
both) of the following techniques:
* **Gradient accumulation**: The samples in a minibatch are typically
independent with respect to gradient computation (excluding batch
normalization, which is not used here). This means that the gradients of
multiple smaller minibatches can be accumulated before performing the weight
update, and this will be exactly equivalent to a single larger update.
* [**Gradient checkpointing**](https://github.com/openai/gradient-checkpointing):
The major use of GPU/TPU memory during DNN training is caching the
intermediate activations in the forward pass that are necessary for
efficient computation in the backward pass. "Gradient checkpointing" trades
memory for compute time by re-computing the activations in an intelligent
way.
**However, this is not implemented in the current release.**
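To make the first idea concrete, here is an illustrative, framework-agnostic
sketch of gradient accumulation (this is *not* code from this repository;
`grad_fn` is a hypothetical stand-in for one backward pass):

```python
import numpy as np

def accumulated_step(weights, micro_batches, grad_fn, lr=1e-4):
    """One optimizer step computed from several small micro-batches."""
    acc = np.zeros_like(weights)
    for batch in micro_batches:
        acc += grad_fn(weights, batch)   # gradient of one micro-batch
    acc /= len(micro_batches)            # mean == gradient of the large batch
    return weights - lr * acc            # single SGD-style update
```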
## Using BERT to extract fixed feature vectors (like ELMo)
In certain cases, rather than fine-tuning the entire pre-trained model
end-to-end, it can be beneficial to obtain *pre-trained contextual
embeddings*, which are fixed contextual representations of each input token
generated from the hidden layers of the pre-trained model. This should also
mitigate most of the out-of-memory issues.
As an example, we include the script `extract_features.py` which can be used
like this:
```shell
# Sentence A and Sentence B are separated by the ||| delimiter for sentence
# pair tasks like question answering and entailment.
# For single sentence inputs, put one sentence per line and DON'T use the
# delimiter.
echo 'Who was Jim Henson ? ||| Jim Henson was a puppeteer' > /tmp/input.txt
python extract_features.py \
--input_file=/tmp/input.txt \
--output_file=/tmp/output.jsonl \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--layers=-1,-2,-3,-4 \
--max_seq_length=128 \
--batch_size=8
```
This will create a JSON file (one line per line of input) containing the BERT
activations from each Transformer layer specified by `layers` (-1 is the final
hidden layer of the Transformer, and so on).
Note that this script will produce very large output files (by default, around
15kb for every input token).
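A minimal sketch for consuming the output (the field names follow the JSON
structure that `extract_features.py`, included below, actually writes):

```python
import json

with open("/tmp/output.jsonl") as f:
    for line in f:
        record = json.loads(line)
        for feat in record["features"]:
            token = feat["token"]
            # layers[0] is the first requested layer (index -1 above).
            vec = feat["layers"][0]["values"]
            print(token, len(vec))  # 768 dimensions for BERT-Base
```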
If you need to maintain alignment between the original and tokenized words (for
projecting training labels), see the [Tokenization](#tokenization) section
below.
**Note:** You may see a message like `Could not find trained model in model_dir:
/tmp/tmpuB5g5c, running initialization to predict.` This message is expected; it
just means that we are using the `init_from_checkpoint()` API rather than the
saved model API. If you don't specify a checkpoint or specify an invalid
checkpoint, this script will complain.
## Tokenization
For sentence-level (or sentence-pair) tasks, tokenization is very simple.
Just follow the example code in `run_classifier.py` and `extract_features.py`.
The basic procedure for sentence-level tasks is:
1. Instantiate a tokenizer: `tokenizer = tokenization.FullTokenizer(...)`
2. Tokenize the raw text with `tokens = tokenizer.tokenize(raw_text)`.
3. Truncate to the maximum sequence length. (You can use up to 512, but you
probably want to use shorter if possible for memory and speed reasons.)
4. Add the `[CLS]` and `[SEP]` tokens in the right place (the four steps are
   sketched in code below).
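Putting those four steps together, a minimal sketch (assuming this repository is
on your `PYTHONPATH` and the vocab path points at an unpacked checkpoint):

```python
import tokenization  # from this repository

max_seq_length = 128
tokenizer = tokenization.FullTokenizer(
    vocab_file="/path/to/bert/uncased_L-12_H-768_A-12/vocab.txt",
    do_lower_case=True)

tokens = tokenizer.tokenize("Who was Jim Henson ?")   # step 2
tokens = tokens[:max_seq_length - 2]                  # step 3: leave room for special tokens
tokens = ["[CLS]"] + tokens + ["[SEP]"]               # step 4
input_ids = tokenizer.convert_tokens_to_ids(tokens)
```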
Word-level and span-level tasks (e.g., SQuAD and NER) are more complex, since
you need to maintain alignment between your input text and output text so that
you can project your training labels. SQuAD is a particularly complex example
because the input labels are *character*-based, and SQuAD paragraphs are often
longer than our maximum sequence length. See the code in `run_squad.py` for an
example of how we handle this.
Before we describe the general recipe for handling word-level tasks, it's
important to understand what exactly our tokenizer is doing. It has three main
steps:
1. **Text normalization**: Convert all whitespace characters to spaces, and
(for the `Uncased` model) lowercase the input and strip out accent markers.
E.g., `John Johanson's, → john johanson's,`.
2. **Punctuation splitting**: Split *all* punctuation characters on both sides
(i.e., add whitespace around all punctuation characters). Punctuation
characters are defined as (a) Anything with a `P*` Unicode class, (b) any
non-letter/number/space ASCII character (e.g., characters like `$` which are
technically not punctuation). E.g., `john johanson's, → john johanson ' s ,`
3. **WordPiece tokenization**: Apply whitespace tokenization to the output of
the above procedure, and apply
[WordPiece](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/text_encoder.py)
tokenization to each token separately. (Our implementation is directly based
on the one from `tensor2tensor`, which is linked). E.g., `john johanson ' s
, → john johan ##son ' s ,`
The advantage of this scheme is that it is "compatible" with most existing
English tokenizers. For example, imagine that you have a part-of-speech tagging
task which looks like this:
```
Input: John Johanson 's house
Labels: NNP NNP POS NN
```
The tokenized output will look like this:
```
Tokens: john johan ##son ' s house
```
Crucially, this would be the same output as if the raw text were `John
Johanson's house` (with no space before the `'s`).
If you have a pre-tokenized representation with word-level annotations, you can
simply tokenize each input word independently, and deterministically maintain an
original-to-tokenized alignment:
```python
### Input
orig_tokens = ["John", "Johanson", "'s", "house"]
labels = ["NNP", "NNP", "POS", "NN"]
### Output
bert_tokens = []
# Token map will be an int -> int mapping between the `orig_tokens` index and
# the `bert_tokens` index.
orig_to_tok_map = []
tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=True)
bert_tokens.append("[CLS]")
for orig_token in orig_tokens:
orig_to_tok_map.append(len(bert_tokens))
bert_tokens.extend(tokenizer.tokenize(orig_token))
bert_tokens.append("[SEP]")
# bert_tokens == ["[CLS]", "john", "johan", "##son", "'", "s", "house", "[SEP]"]
# orig_to_tok_map == [1, 2, 4, 6]
```
Now `orig_to_tok_map` can be used to project `labels` to the tokenized
representation.
There are common English tokenization schemes which will cause a slight mismatch
with how BERT was pre-trained. For example, if your input tokenization splits
off contractions like `do n't`, this will cause a mismatch. If it is possible to
do so, you should pre-process your data to convert these back to raw-looking
text, but if it's not possible, this mismatch is likely not a big deal.
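As an illustration of the "convert back to raw-looking text" suggestion, here is
a small heuristic sketch (the contraction list is just an example, not
exhaustive):

```python
import re

def undo_contraction_splitting(text):
    # Re-attach PTB-style split contractions: "do n't" -> "don't".
    return re.sub(r" (n't|'s|'re|'ve|'ll|'d|'m)\b", r"\1", text)

print(undo_contraction_splitting("I do n't think it 's here"))
# -> "I don't think it's here"
```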
## Pre-training with BERT
We are releasing code to do "masked LM" and "next sentence prediction" on an
arbitrary text corpus. Note that this is *not* the exact code that was used for
the paper (the original code was written in C++, and had some additional
complexity), but this code does generate pre-training data as described in the
paper.
Here's how to run the data generation. The input is a plain text file, with one
sentence per line. (It is important that these be actual sentences for the "next
sentence prediction" task). Documents are delimited by empty lines. The output
is a set of `tf.train.Example`s serialized into `TFRecord` file format.
You can perform sentence segmentation with an off-the-shelf NLP toolkit such as
[spaCy](https://spacy.io/). The `create_pretraining_data.py` script will
concatenate segments until they reach the maximum sequence length to minimize
computational waste from padding (see the script for more details). However, you
may want to intentionally add a slight amount of noise to your input data (e.g.,
randomly truncate 2% of input segments) to make it more robust to non-sentential
input during fine-tuning.
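For illustration, a minimal sketch of producing this format with spaCy (assumes
`pip install spacy` and `python -m spacy download en_core_web_sm`; the document
list and output path are examples):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
documents = [
    "Jim Henson was a puppeteer. He created the Muppets.",
    "BERT is a Transformer encoder. It is pre-trained on plain text.",
]

with open("/tmp/pretrain_input.txt", "w") as f:
    for doc_text in documents:
        for sent in nlp(doc_text).sents:  # one sentence per line
            f.write(sent.text.strip() + "\n")
        f.write("\n")  # blank line delimits documents
```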
This script stores all of the examples for the entire input file in memory, so
for large data files you should shard the input file and call the script
multiple times. (You can pass in a file glob to `run_pretraining.py`, e.g.,
`tf_examples.tf_record*`.)
The `max_predictions_per_seq` is the maximum number of masked LM predictions per
sequence. You should set this to around `max_seq_length` * `masked_lm_prob` (the
script doesn't do that automatically because the exact value needs to be passed
to both scripts).
```shell
python create_pretraining_data.py \
--input_file=./sample_text.txt \
--output_file=/tmp/tf_examples.tfrecord \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--do_lower_case=True \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--masked_lm_prob=0.15 \
--random_seed=12345 \
--dupe_factor=5
```
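As a sanity check on the rule of thumb above, the flags in this command are
consistent with each other:

```python
max_seq_length = 128
masked_lm_prob = 0.15
print(int(round(max_seq_length * masked_lm_prob)))  # 19; the command uses the slightly larger 20
```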
Here's how to run the pre-training. Do not include `init_checkpoint` if you are
pre-training from scratch. The model configuration (including vocab size) is
specified in `bert_config_file`. This demo code only pre-trains for a small
number of steps (20), but in practice you will probably want to set
`num_train_steps` to 10000 steps or more. The `max_seq_length` and
`max_predictions_per_seq` parameters passed to `run_pretraining.py` must be the
same as those passed to `create_pretraining_data.py`.
```shell
python run_pretraining.py \
--input_file=/tmp/tf_examples.tfrecord \
--output_dir=/tmp/pretraining_output \
--do_train=True \
--do_eval=True \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
--train_batch_size=32 \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--num_train_steps=20 \
--num_warmup_steps=10 \
--learning_rate=2e-5
```
This will produce an output like this:
```
***** Eval results *****
global_step = 20
loss = 0.0979674
masked_lm_accuracy = 0.985479
masked_lm_loss = 0.0979328
next_sentence_accuracy = 1.0
next_sentence_loss = 3.45724e-05
```
Note that since our `sample_text.txt` file is very small, this example training
will overfit that data in only a few steps and produce unrealistically high
accuracy numbers.
### Pre-training tips and caveats
* **If using your own vocabulary, make sure to change `vocab_size` in
  `bert_config.json`. If you use a larger vocabulary without changing this,
  you will likely get NaNs when training on GPU or TPU due to unchecked
  out-of-bounds access.** (A sketch for keeping the config in sync appears
  after this list.)
* If your task has a large domain-specific corpus available (e.g., "movie
reviews" or "scientific papers"), it will likely be beneficial to run
additional steps of pre-training on your corpus, starting from the BERT
checkpoint.
* The learning rate we used in the paper was 1e-4. However, if you are doing
additional steps of pre-training starting from an existing BERT checkpoint,
you should use a smaller learning rate (e.g., 2e-5).
* Current BERT models are English-only, but we do plan to release a
multilingual model which has been pre-trained on a lot of languages in the
near future (hopefully by the end of November 2018).
* Longer sequences are disproportionately expensive because attention is
quadratic to the sequence length. In other words, a batch of 64 sequences of
length 512 is much more expensive than a batch of 256 sequences of
length 128. The fully-connected/convolutional cost is the same, but the
attention cost is far greater for the 512-length sequences. Therefore, one
good recipe is to pre-train for, say, 90,000 steps with a sequence length of
128 and then for 10,000 additional steps with a sequence length of 512. The
very long sequences are mostly needed to learn positional embeddings, which
can be learned fairly quickly. Note that this does require generating the
data twice with different values of `max_seq_length`.
* If you are pre-training from scratch, be prepared that pre-training is
computationally expensive, especially on GPUs. If you are pre-training from
scratch, our recommended recipe is to pre-train a `BERT-Base` on a single
[preemptible Cloud TPU v2](https://cloud.google.com/tpu/docs/pricing), which
takes about 2 weeks at a cost of about $500 USD (based on the pricing in
October 2018). You will have to scale down the batch size when only training
on a single Cloud TPU, compared to what was used in the paper. It is
recommended to use the largest batch size that fits into TPU memory.
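Here is the sketch promised in the first tip: a small script (file paths are
examples) to keep `vocab_size` in `bert_config.json` consistent with a custom
`vocab.txt`:

```python
import json

# Count WordPiece entries: vocab.txt has one token per line.
with open("vocab.txt") as f:
    vocab_size = sum(1 for _ in f)

with open("bert_config.json") as f:
    config = json.load(f)
config["vocab_size"] = vocab_size

with open("bert_config.json", "w") as f:
    json.dump(config, f, indent=2, sort_keys=True)
```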
### Pre-training data
We will **not** be able to release the pre-processed datasets used in the paper.
For Wikipedia, the recommended pre-processing is to download
[the latest dump](https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2),
extract the text with
[`WikiExtractor.py`](https://github.com/attardi/wikiextractor), and then apply
any necessary cleanup to convert it into plain text.
Unfortunately the researchers who collected the
[BookCorpus](http://yknzhu.wixsite.com/mbweb) no longer have it available for
public download. The
[Project Gutenberg Dataset](https://web.eecs.umich.edu/~lahiri/gutenberg_dataset.html)
is a somewhat smaller (200M word) collection of older books that are public
domain.
[Common Crawl](http://commoncrawl.org/) is another very large collection of
text, but you will likely have to do substantial pre-processing and cleanup to
extract a usable corpus for pre-training BERT.
### Learning a new WordPiece vocabulary
This repository does not include code for *learning* a new WordPiece vocabulary.
The reason is that the code used in the paper was implemented in C++ with
dependencies on Google's internal libraries. For English, it is almost always
better to just start with our vocabulary and pre-trained models. For learning
vocabularies of other languages, there are a number of open source options
available. However, keep in mind that these are not compatible with our
`tokenization.py` library:
* [Google's SentencePiece library](https://github.com/google/sentencepiece)
* [tensor2tensor's WordPiece generation script](https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/data_generators/text_encoder_build_subword.py)
* [Rico Sennrich's Byte Pair Encoding library](https://github.com/rsennrich/subword-nmt)
## Using BERT in Colab
If you want to use BERT with [Colab](https://colab.research.google.com), you can
get started with the notebook
"[BERT FineTuning with Cloud TPUs](https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)".
**At the time of this writing (October 31st, 2018), Colab users can access a
Cloud TPU completely for free.** Note: One per user, availability limited,
requires a Google Cloud Platform account with storage (although storage may be
purchased with free credit for signing up with GCP), and this capability may no
longer be available in the future. Click on the BERT Colab that was just linked
for more information.
## FAQ
#### Is this code compatible with Cloud TPUs? What about GPUs?
Yes, all of the code in this repository works out-of-the-box with CPU, GPU, and
Cloud TPU. However, GPU training is single-GPU only.
#### I am getting out-of-memory errors, what is wrong?
See the section on [out-of-memory issues](#out-of-memory-issues) for more
information.
#### Is there a PyTorch version available?
There is no official PyTorch implementation. However, NLP researchers from
HuggingFace made a
[PyTorch version of BERT available](https://github.com/huggingface/pytorch-pretrained-BERT)
which is compatible with our pre-trained checkpoints and is able to reproduce
our results. We were not involved in the creation or maintenance of the PyTorch
implementation so please direct any questions towards the authors of that
repository.
#### Is there a Chainer version available?
There is no official Chainer implementation. However, Sosuke Kobayashi made a
[Chainer version of BERT available](https://github.com/soskek/bert-chainer)
which is compatible with our pre-trained checkpoints and is able to reproduce
our results. We were not involved in the creation or maintenance of the Chainer
implementation so please direct any questions towards the authors of that
repository.
#### Will models in other languages be released?
Yes, we plan to release a multi-lingual BERT model in the near future. We cannot
make promises about exactly which languages will be included, but it will likely
be a single model which includes *most* of the languages which have a
significantly-sized Wikipedia.
#### Will models larger than `BERT-Large` be released?
So far we have not attempted to train anything larger than `BERT-Large`. It is
possible that we will release larger models if we are able to obtain significant
improvements.
#### What license is this library released under?
All code *and* models are released under the Apache 2.0 license. See the
`LICENSE` file for more information.
#### How do I cite BERT?
For now, cite [the Arxiv paper](https://arxiv.org/abs/1810.04805):
```
@article{devlin2018bert,
title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
journal={arXiv preprint arXiv:1810.04805},
year={2018}
}
```
If we submit the paper to a conference or journal, we will update the BibTeX.
## Disclaimer
This is not an official Google product.
## Contact information
For help or issues using BERT, please submit a GitHub issue.
For personal communication related to BERT, please contact Jacob Devlin
(`[email protected]`), Ming-Wei Chang (`[email protected]`), or
Kenton Lee (`[email protected]`).
|
https://github.com/google-research/bert
|
__init__.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
|
https://github.com/google-research/bert
|
create_pretraining_data.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Create masked LM/next sentence masked_lm TF examples for BERT."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import random
import tokenization
import tensorflow as tf
flags = tf.flags
FLAGS = flags.FLAGS
flags.DEFINE_string("input_file", None,
"Input raw text file (or comma-separated list of files).")
flags.DEFINE_string(
"output_file", None,
"Output TF example file (or comma-separated list of files).")
flags.DEFINE_string("vocab_file", None,
"The vocabulary file that the BERT model was trained on.")
flags.DEFINE_bool(
"do_lower_case", True,
"Whether to lower case the input text. Should be True for uncased "
"models and False for cased models.")
flags.DEFINE_bool(
"do_whole_word_mask", False,
"Whether to use whole word masking rather than per-WordPiece masking.")
flags.DEFINE_integer("max_seq_length", 128, "Maximum sequence length.")
flags.DEFINE_integer("max_predictions_per_seq", 20,
"Maximum number of masked LM predictions per sequence.")
flags.DEFINE_integer("random_seed", 12345, "Random seed for data generation.")
flags.DEFINE_integer(
"dupe_factor", 10,
"Number of times to duplicate the input data (with different masks).")
flags.DEFINE_float("masked_lm_prob", 0.15, "Masked LM probability.")
flags.DEFINE_float(
"short_seq_prob", 0.1,
"Probability of creating sequences which are shorter than the "
"maximum length.")
class TrainingInstance(object):
"""A single training instance (sentence pair)."""
def __init__(self, tokens, segment_ids, masked_lm_positions, masked_lm_labels,
is_random_next):
self.tokens = tokens
self.segment_ids = segment_ids
self.is_random_next = is_random_next
self.masked_lm_positions = masked_lm_positions
self.masked_lm_labels = masked_lm_labels
def __str__(self):
s = ""
s += "tokens: %s\n" % (" ".join(
[tokenization.printable_text(x) for x in self.tokens]))
s += "segment_ids: %s\n" % (" ".join([str(x) for x in self.segment_ids]))
s += "is_random_next: %s\n" % self.is_random_next
s += "masked_lm_positions: %s\n" % (" ".join(
[str(x) for x in self.masked_lm_positions]))
s += "masked_lm_labels: %s\n" % (" ".join(
[tokenization.printable_text(x) for x in self.masked_lm_labels]))
s += "\n"
return s
def __repr__(self):
return self.__str__()
def write_instance_to_example_files(instances, tokenizer, max_seq_length,
max_predictions_per_seq, output_files):
"""Create TF example files from `TrainingInstance`s."""
writers = []
for output_file in output_files:
writers.append(tf.python_io.TFRecordWriter(output_file))
writer_index = 0
total_written = 0
for (inst_index, instance) in enumerate(instances):
input_ids = tokenizer.convert_tokens_to_ids(instance.tokens)
input_mask = [1] * len(input_ids)
segment_ids = list(instance.segment_ids)
assert len(input_ids) <= max_seq_length
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
masked_lm_positions = list(instance.masked_lm_positions)
masked_lm_ids = tokenizer.convert_tokens_to_ids(instance.masked_lm_labels)
masked_lm_weights = [1.0] * len(masked_lm_ids)
while len(masked_lm_positions) < max_predictions_per_seq:
masked_lm_positions.append(0)
masked_lm_ids.append(0)
masked_lm_weights.append(0.0)
next_sentence_label = 1 if instance.is_random_next else 0
features = collections.OrderedDict()
features["input_ids"] = create_int_feature(input_ids)
features["input_mask"] = create_int_feature(input_mask)
features["segment_ids"] = create_int_feature(segment_ids)
features["masked_lm_positions"] = create_int_feature(masked_lm_positions)
features["masked_lm_ids"] = create_int_feature(masked_lm_ids)
features["masked_lm_weights"] = create_float_feature(masked_lm_weights)
features["next_sentence_labels"] = create_int_feature([next_sentence_label])
tf_example = tf.train.Example(features=tf.train.Features(feature=features))
writers[writer_index].write(tf_example.SerializeToString())
writer_index = (writer_index + 1) % len(writers)
total_written += 1
if inst_index < 20:
tf.logging.info("*** Example ***")
tf.logging.info("tokens: %s" % " ".join(
[tokenization.printable_text(x) for x in instance.tokens]))
for feature_name in features.keys():
feature = features[feature_name]
values = []
if feature.int64_list.value:
values = feature.int64_list.value
elif feature.float_list.value:
values = feature.float_list.value
tf.logging.info(
"%s: %s" % (feature_name, " ".join([str(x) for x in values])))
for writer in writers:
writer.close()
tf.logging.info("Wrote %d total instances", total_written)
def create_int_feature(values):
feature = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
return feature
def create_float_feature(values):
feature = tf.train.Feature(float_list=tf.train.FloatList(value=list(values)))
return feature
def create_training_instances(input_files, tokenizer, max_seq_length,
dupe_factor, short_seq_prob, masked_lm_prob,
max_predictions_per_seq, rng):
"""Create `TrainingInstance`s from raw text."""
all_documents = [[]]
# Input file format:
# (1) One sentence per line. These should ideally be actual sentences, not
# entire paragraphs or arbitrary spans of text. (Because we use the
# sentence boundaries for the "next sentence prediction" task).
# (2) Blank lines between documents. Document boundaries are needed so
# that the "next sentence prediction" task doesn't span between documents.
for input_file in input_files:
with tf.gfile.GFile(input_file, "r") as reader:
while True:
line = tokenization.convert_to_unicode(reader.readline())
if not line:
break
line = line.strip()
# Empty lines are used as document delimiters
if not line:
all_documents.append([])
tokens = tokenizer.tokenize(line)
if tokens:
all_documents[-1].append(tokens)
# Remove empty documents
all_documents = [x for x in all_documents if x]
rng.shuffle(all_documents)
vocab_words = list(tokenizer.vocab.keys())
instances = []
for _ in range(dupe_factor):
for document_index in range(len(all_documents)):
instances.extend(
create_instances_from_document(
all_documents, document_index, max_seq_length, short_seq_prob,
masked_lm_prob, max_predictions_per_seq, vocab_words, rng))
rng.shuffle(instances)
return instances
def create_instances_from_document(
all_documents, document_index, max_seq_length, short_seq_prob,
masked_lm_prob, max_predictions_per_seq, vocab_words, rng):
"""Creates `TrainingInstance`s for a single document."""
document = all_documents[document_index]
# Account for [CLS], [SEP], [SEP]
max_num_tokens = max_seq_length - 3
# We *usually* want to fill up the entire sequence since we are padding
# to `max_seq_length` anyways, so short sequences are generally wasted
# computation. However, we *sometimes*
# (i.e., short_seq_prob == 0.1 == 10% of the time) want to use shorter
# sequences to minimize the mismatch between pre-training and fine-tuning.
# The `target_seq_length` is just a rough target however, whereas
# `max_seq_length` is a hard limit.
target_seq_length = max_num_tokens
if rng.random() < short_seq_prob:
target_seq_length = rng.randint(2, max_num_tokens)
# We DON'T just concatenate all of the tokens from a document into a long
# sequence and choose an arbitrary split point because this would make the
# next sentence prediction task too easy. Instead, we split the input into
# segments "A" and "B" based on the actual "sentences" provided by the user
# input.
instances = []
current_chunk = []
current_length = 0
i = 0
while i < len(document):
segment = document[i]
current_chunk.append(segment)
current_length += len(segment)
if i == len(document) - 1 or current_length >= target_seq_length:
if current_chunk:
# `a_end` is how many segments from `current_chunk` go into the `A`
# (first) sentence.
a_end = 1
if len(current_chunk) >= 2:
a_end = rng.randint(1, len(current_chunk) - 1)
tokens_a = []
for j in range(a_end):
tokens_a.extend(current_chunk[j])
tokens_b = []
# Random next
is_random_next = False
if len(current_chunk) == 1 or rng.random() < 0.5:
is_random_next = True
target_b_length = target_seq_length - len(tokens_a)
# This should rarely go for more than one iteration for large
# corpora. However, just to be careful, we try to make sure that
# the random document is not the same as the document
# we're processing.
for _ in range(10):
random_document_index = rng.randint(0, len(all_documents) - 1)
if random_document_index != document_index:
break
random_document = all_documents[random_document_index]
random_start = rng.randint(0, len(random_document) - 1)
for j in range(random_start, len(random_document)):
tokens_b.extend(random_document[j])
if len(tokens_b) >= target_b_length:
break
# We didn't actually use these segments so we "put them back" so
# they don't go to waste.
num_unused_segments = len(current_chunk) - a_end
i -= num_unused_segments
# Actual next
else:
is_random_next = False
for j in range(a_end, len(current_chunk)):
tokens_b.extend(current_chunk[j])
truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng)
assert len(tokens_a) >= 1
assert len(tokens_b) >= 1
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
(tokens, masked_lm_positions,
masked_lm_labels) = create_masked_lm_predictions(
tokens, masked_lm_prob, max_predictions_per_seq, vocab_words, rng)
instance = TrainingInstance(
tokens=tokens,
segment_ids=segment_ids,
is_random_next=is_random_next,
masked_lm_positions=masked_lm_positions,
masked_lm_labels=masked_lm_labels)
instances.append(instance)
current_chunk = []
current_length = 0
i += 1
return instances
MaskedLmInstance = collections.namedtuple("MaskedLmInstance",
["index", "label"])
def create_masked_lm_predictions(tokens, masked_lm_prob,
max_predictions_per_seq, vocab_words, rng):
"""Creates the predictions for the masked LM objective."""
cand_indexes = []
for (i, token) in enumerate(tokens):
if token == "[CLS]" or token == "[SEP]":
continue
    # Whole Word Masking means that we mask all of the wordpieces
    # corresponding to an original word. When a word has been split into
    # WordPieces, the first token does not have any marker and any subsequent
    # tokens are prefixed with ##. So whenever we see a ## token, we
    # append it to the previous set of word indexes.
#
# Note that Whole Word Masking does *not* change the training code
# at all -- we still predict each WordPiece independently, softmaxed
# over the entire vocabulary.
if (FLAGS.do_whole_word_mask and len(cand_indexes) >= 1 and
token.startswith("##")):
cand_indexes[-1].append(i)
else:
cand_indexes.append([i])
rng.shuffle(cand_indexes)
output_tokens = list(tokens)
num_to_predict = min(max_predictions_per_seq,
max(1, int(round(len(tokens) * masked_lm_prob))))
masked_lms = []
covered_indexes = set()
for index_set in cand_indexes:
if len(masked_lms) >= num_to_predict:
break
# If adding a whole-word mask would exceed the maximum number of
# predictions, then just skip this candidate.
if len(masked_lms) + len(index_set) > num_to_predict:
continue
is_any_index_covered = False
for index in index_set:
if index in covered_indexes:
is_any_index_covered = True
break
if is_any_index_covered:
continue
for index in index_set:
covered_indexes.add(index)
masked_token = None
# 80% of the time, replace with [MASK]
if rng.random() < 0.8:
masked_token = "[MASK]"
else:
# 10% of the time, keep original
if rng.random() < 0.5:
masked_token = tokens[index]
# 10% of the time, replace with random word
else:
masked_token = vocab_words[rng.randint(0, len(vocab_words) - 1)]
output_tokens[index] = masked_token
masked_lms.append(MaskedLmInstance(index=index, label=tokens[index]))
assert len(masked_lms) <= num_to_predict
masked_lms = sorted(masked_lms, key=lambda x: x.index)
masked_lm_positions = []
masked_lm_labels = []
for p in masked_lms:
masked_lm_positions.append(p.index)
masked_lm_labels.append(p.label)
return (output_tokens, masked_lm_positions, masked_lm_labels)
def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens, rng):
"""Truncates a pair of sequences to a maximum sequence length."""
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_num_tokens:
break
trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
assert len(trunc_tokens) >= 1
# We want to sometimes truncate from the front and sometimes from the
# back to add more randomness and avoid biases.
if rng.random() < 0.5:
del trunc_tokens[0]
else:
trunc_tokens.pop()
def main(_):
tf.logging.set_verbosity(tf.logging.INFO)
tokenizer = tokenization.FullTokenizer(
vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
input_files = []
for input_pattern in FLAGS.input_file.split(","):
input_files.extend(tf.gfile.Glob(input_pattern))
tf.logging.info("*** Reading from input files ***")
for input_file in input_files:
tf.logging.info(" %s", input_file)
rng = random.Random(FLAGS.random_seed)
instances = create_training_instances(
input_files, tokenizer, FLAGS.max_seq_length, FLAGS.dupe_factor,
FLAGS.short_seq_prob, FLAGS.masked_lm_prob, FLAGS.max_predictions_per_seq,
rng)
output_files = FLAGS.output_file.split(",")
tf.logging.info("*** Writing to output files ***")
for output_file in output_files:
tf.logging.info(" %s", output_file)
write_instance_to_example_files(instances, tokenizer, FLAGS.max_seq_length,
FLAGS.max_predictions_per_seq, output_files)
if __name__ == "__main__":
flags.mark_flag_as_required("input_file")
flags.mark_flag_as_required("output_file")
flags.mark_flag_as_required("vocab_file")
tf.app.run()
|
https://github.com/google-research/bert
|
extract_features.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Extract pre-computed feature vectors from BERT."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import codecs
import collections
import json
import re
import modeling
import tokenization
import tensorflow as tf
flags = tf.flags
FLAGS = flags.FLAGS
flags.DEFINE_string("input_file", None, "")
flags.DEFINE_string("output_file", None, "")
flags.DEFINE_string("layers", "-1,-2,-3,-4", "")
flags.DEFINE_string(
"bert_config_file", None,
"The config json file corresponding to the pre-trained BERT model. "
"This specifies the model architecture.")
flags.DEFINE_integer(
"max_seq_length", 128,
"The maximum total input sequence length after WordPiece tokenization. "
"Sequences longer than this will be truncated, and sequences shorter "
"than this will be padded.")
flags.DEFINE_string(
"init_checkpoint", None,
"Initial checkpoint (usually from a pre-trained BERT model).")
flags.DEFINE_string("vocab_file", None,
"The vocabulary file that the BERT model was trained on.")
flags.DEFINE_bool(
"do_lower_case", True,
"Whether to lower case the input text. Should be True for uncased "
"models and False for cased models.")
flags.DEFINE_integer("batch_size", 32, "Batch size for predictions.")
flags.DEFINE_bool("use_tpu", False, "Whether to use TPU or GPU/CPU.")
flags.DEFINE_string("master", None,
"If using a TPU, the address of the master.")
flags.DEFINE_integer(
"num_tpu_cores", 8,
"Only used if `use_tpu` is True. Total number of TPU cores to use.")
flags.DEFINE_bool(
"use_one_hot_embeddings", False,
"If True, tf.one_hot will be used for embedding lookups, otherwise "
"tf.nn.embedding_lookup will be used. On TPUs, this should be True "
"since it is much faster.")
class InputExample(object):
def __init__(self, unique_id, text_a, text_b):
self.unique_id = unique_id
self.text_a = text_a
self.text_b = text_b
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, unique_id, tokens, input_ids, input_mask, input_type_ids):
self.unique_id = unique_id
self.tokens = tokens
self.input_ids = input_ids
self.input_mask = input_mask
self.input_type_ids = input_type_ids
def input_fn_builder(features, seq_length):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
all_unique_ids = []
all_input_ids = []
all_input_mask = []
all_input_type_ids = []
for feature in features:
all_unique_ids.append(feature.unique_id)
all_input_ids.append(feature.input_ids)
all_input_mask.append(feature.input_mask)
all_input_type_ids.append(feature.input_type_ids)
def input_fn(params):
"""The actual input function."""
batch_size = params["batch_size"]
num_examples = len(features)
# This is for demo purposes and does NOT scale to large data sets. We do
# not use Dataset.from_generator() because that uses tf.py_func which is
# not TPU compatible. The right way to load data is with TFRecordReader.
d = tf.data.Dataset.from_tensor_slices({
"unique_ids":
tf.constant(all_unique_ids, shape=[num_examples], dtype=tf.int32),
"input_ids":
tf.constant(
all_input_ids, shape=[num_examples, seq_length],
dtype=tf.int32),
"input_mask":
tf.constant(
all_input_mask,
shape=[num_examples, seq_length],
dtype=tf.int32),
"input_type_ids":
tf.constant(
all_input_type_ids,
shape=[num_examples, seq_length],
dtype=tf.int32),
})
d = d.batch(batch_size=batch_size, drop_remainder=False)
return d
return input_fn
def model_fn_builder(bert_config, init_checkpoint, layer_indexes, use_tpu,
use_one_hot_embeddings):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
unique_ids = features["unique_ids"]
input_ids = features["input_ids"]
input_mask = features["input_mask"]
input_type_ids = features["input_type_ids"]
model = modeling.BertModel(
config=bert_config,
is_training=False,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=input_type_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
if mode != tf.estimator.ModeKeys.PREDICT:
raise ValueError("Only PREDICT modes are supported: %s" % (mode))
tvars = tf.trainable_variables()
scaffold_fn = None
(assignment_map,
initialized_variable_names) = modeling.get_assignment_map_from_checkpoint(
tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
return tf.train.Scaffold()
scaffold_fn = tpu_scaffold
else:
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
tf.logging.info("**** Trainable Variables ****")
for var in tvars:
init_string = ""
if var.name in initialized_variable_names:
init_string = ", *INIT_FROM_CKPT*"
tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
init_string)
all_layers = model.get_all_encoder_layers()
predictions = {
"unique_id": unique_ids,
}
for (i, layer_index) in enumerate(layer_indexes):
predictions["layer_output_%d" % i] = all_layers[layer_index]
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
return output_spec
return model_fn
def convert_examples_to_features(examples, seq_length, tokenizer):
"""Loads a data file into a list of `InputBatch`s."""
features = []
for (ex_index, example) in enumerate(examples):
tokens_a = tokenizer.tokenize(example.text_a)
tokens_b = None
if example.text_b:
tokens_b = tokenizer.tokenize(example.text_b)
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
_truncate_seq_pair(tokens_a, tokens_b, seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
if len(tokens_a) > seq_length - 2:
tokens_a = tokens_a[0:(seq_length - 2)]
# The convention in BERT is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
    # For classification tasks, the first vector (corresponding to [CLS]) is
    # used as the "sentence vector". Note that this only makes sense because
    # the entire model is fine-tuned.
tokens = []
input_type_ids = []
tokens.append("[CLS]")
input_type_ids.append(0)
for token in tokens_a:
tokens.append(token)
input_type_ids.append(0)
tokens.append("[SEP]")
input_type_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
input_type_ids.append(1)
tokens.append("[SEP]")
input_type_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < seq_length:
input_ids.append(0)
input_mask.append(0)
input_type_ids.append(0)
assert len(input_ids) == seq_length
assert len(input_mask) == seq_length
assert len(input_type_ids) == seq_length
if ex_index < 5:
tf.logging.info("*** Example ***")
tf.logging.info("unique_id: %s" % (example.unique_id))
tf.logging.info("tokens: %s" % " ".join(
[tokenization.printable_text(x) for x in tokens]))
tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
tf.logging.info(
"input_type_ids: %s" % " ".join([str(x) for x in input_type_ids]))
features.append(
InputFeatures(
unique_id=example.unique_id,
tokens=tokens,
input_ids=input_ids,
input_mask=input_mask,
input_type_ids=input_type_ids))
return features
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
def read_examples(input_file):
"""Read a list of `InputExample`s from an input file."""
examples = []
unique_id = 0
with tf.gfile.GFile(input_file, "r") as reader:
while True:
line = tokenization.convert_to_unicode(reader.readline())
if not line:
break
line = line.strip()
text_a = None
text_b = None
m = re.match(r"^(.*) \|\|\| (.*)$", line)
if m is None:
text_a = line
else:
text_a = m.group(1)
text_b = m.group(2)
examples.append(
InputExample(unique_id=unique_id, text_a=text_a, text_b=text_b))
unique_id += 1
return examples
def main(_):
tf.logging.set_verbosity(tf.logging.INFO)
layer_indexes = [int(x) for x in FLAGS.layers.split(",")]
bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
tokenizer = tokenization.FullTokenizer(
vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
master=FLAGS.master,
tpu_config=tf.contrib.tpu.TPUConfig(
num_shards=FLAGS.num_tpu_cores,
per_host_input_for_training=is_per_host))
examples = read_examples(FLAGS.input_file)
features = convert_examples_to_features(
examples=examples, seq_length=FLAGS.max_seq_length, tokenizer=tokenizer)
unique_id_to_feature = {}
for feature in features:
unique_id_to_feature[feature.unique_id] = feature
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=FLAGS.init_checkpoint,
layer_indexes=layer_indexes,
use_tpu=FLAGS.use_tpu,
use_one_hot_embeddings=FLAGS.use_one_hot_embeddings)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=FLAGS.use_tpu,
model_fn=model_fn,
config=run_config,
predict_batch_size=FLAGS.batch_size)
input_fn = input_fn_builder(
features=features, seq_length=FLAGS.max_seq_length)
with codecs.getwriter("utf-8")(tf.gfile.Open(FLAGS.output_file,
"w")) as writer:
for result in estimator.predict(input_fn, yield_single_examples=True):
unique_id = int(result["unique_id"])
feature = unique_id_to_feature[unique_id]
output_json = collections.OrderedDict()
output_json["linex_index"] = unique_id
all_features = []
for (i, token) in enumerate(feature.tokens):
all_layers = []
for (j, layer_index) in enumerate(layer_indexes):
layer_output = result["layer_output_%d" % j]
layers = collections.OrderedDict()
layers["index"] = layer_index
layers["values"] = [
round(float(x), 6) for x in layer_output[i:(i + 1)].flat
]
all_layers.append(layers)
features = collections.OrderedDict()
features["token"] = token
features["layers"] = all_layers
all_features.append(features)
output_json["features"] = all_features
writer.write(json.dumps(output_json) + "\n")
if __name__ == "__main__":
flags.mark_flag_as_required("input_file")
flags.mark_flag_as_required("vocab_file")
flags.mark_flag_as_required("bert_config_file")
flags.mark_flag_as_required("init_checkpoint")
flags.mark_flag_as_required("output_file")
tf.app.run()
|
https://github.com/google-research/bert
|
modeling.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""The main BERT model and related functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import copy
import json
import math
import re
import numpy as np
import six
import tensorflow as tf
class BertConfig(object):
"""Configuration for `BertModel`."""
def __init__(self,
vocab_size,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
hidden_act="gelu",
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
max_position_embeddings=512,
type_vocab_size=16,
initializer_range=0.02):
"""Constructs BertConfig.
Args:
vocab_size: Vocabulary size of `inputs_ids` in `BertModel`.
hidden_size: Size of the encoder layers and the pooler layer.
num_hidden_layers: Number of hidden layers in the Transformer encoder.
num_attention_heads: Number of attention heads for each attention layer in
the Transformer encoder.
intermediate_size: The size of the "intermediate" (i.e., feed-forward)
layer in the Transformer encoder.
hidden_act: The non-linear activation function (function or string) in the
encoder and pooler.
hidden_dropout_prob: The dropout probability for all fully connected
layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob: The dropout ratio for the attention
probabilities.
max_position_embeddings: The maximum sequence length that this model might
ever be used with. Typically set this to something large just in case
(e.g., 512 or 1024 or 2048).
type_vocab_size: The vocabulary size of the `token_type_ids` passed into
`BertModel`.
initializer_range: The stdev of the truncated_normal_initializer for
initializing all weight matrices.
"""
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.hidden_act = hidden_act
self.intermediate_size = intermediate_size
self.hidden_dropout_prob = hidden_dropout_prob
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.max_position_embeddings = max_position_embeddings
self.type_vocab_size = type_vocab_size
self.initializer_range = initializer_range
@classmethod
def from_dict(cls, json_object):
"""Constructs a `BertConfig` from a Python dictionary of parameters."""
config = BertConfig(vocab_size=None)
for (key, value) in six.iteritems(json_object):
config.__dict__[key] = value
return config
@classmethod
def from_json_file(cls, json_file):
"""Constructs a `BertConfig` from a json file of parameters."""
with tf.gfile.GFile(json_file, "r") as reader:
text = reader.read()
return cls.from_dict(json.loads(text))
def to_dict(self):
"""Serializes this instance to a Python dictionary."""
output = copy.deepcopy(self.__dict__)
return output
def to_json_string(self):
"""Serializes this instance to a JSON string."""
return json.dumps(self.to_dict(), indent=2, sort_keys=True) + "\n"
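# Hedged usage sketch (comment added for this document, not part of the
# original file): a config survives a JSON round trip unchanged.
#   config = BertConfig(vocab_size=32000)
#   same = BertConfig.from_dict(json.loads(config.to_json_string()))
#   assert same.to_dict() == config.to_dict()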
class BertModel(object):
"""BERT model ("Bidirectional Encoder Representations from Transformers").
Example usage:
```python
# Already been converted into WordPiece token ids
input_ids = tf.constant([[31, 51, 99], [15, 5, 0]])
input_mask = tf.constant([[1, 1, 1], [1, 1, 0]])
token_type_ids = tf.constant([[0, 0, 1], [0, 2, 0]])
config = modeling.BertConfig(vocab_size=32000, hidden_size=512,
num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024)
model = modeling.BertModel(config=config, is_training=True,
input_ids=input_ids, input_mask=input_mask, token_type_ids=token_type_ids)
label_embeddings = tf.get_variable(...)
pooled_output = model.get_pooled_output()
logits = tf.matmul(pooled_output, label_embeddings)
...
```
"""
def __init__(self,
config,
is_training,
input_ids,
input_mask=None,
token_type_ids=None,
use_one_hot_embeddings=False,
scope=None):
"""Constructor for BertModel.
Args:
config: `BertConfig` instance.
is_training: bool. true for training model, false for eval model. Controls
whether dropout will be applied.
input_ids: int32 Tensor of shape [batch_size, seq_length].
input_mask: (optional) int32 Tensor of shape [batch_size, seq_length].
token_type_ids: (optional) int32 Tensor of shape [batch_size, seq_length].
use_one_hot_embeddings: (optional) bool. Whether to use one-hot word
embeddings or tf.embedding_lookup() for the word embeddings.
scope: (optional) variable scope. Defaults to "bert".
Raises:
ValueError: The config is invalid or one of the input tensor shapes
is invalid.
"""
config = copy.deepcopy(config)
if not is_training:
config.hidden_dropout_prob = 0.0
config.attention_probs_dropout_prob = 0.0
input_shape = get_shape_list(input_ids, expected_rank=2)
batch_size = input_shape[0]
seq_length = input_shape[1]
if input_mask is None:
input_mask = tf.ones(shape=[batch_size, seq_length], dtype=tf.int32)
if token_type_ids is None:
token_type_ids = tf.zeros(shape=[batch_size, seq_length], dtype=tf.int32)
with tf.variable_scope(scope, default_name="bert"):
with tf.variable_scope("embeddings"):
# Perform embedding lookup on the word ids.
(self.embedding_output, self.embedding_table) = embedding_lookup(
input_ids=input_ids,
vocab_size=config.vocab_size,
embedding_size=config.hidden_size,
initializer_range=config.initializer_range,
word_embedding_name="word_embeddings",
use_one_hot_embeddings=use_one_hot_embeddings)
# Add positional embeddings and token type embeddings, then layer
# normalize and perform dropout.
self.embedding_output = embedding_postprocessor(
input_tensor=self.embedding_output,
use_token_type=True,
token_type_ids=token_type_ids,
token_type_vocab_size=config.type_vocab_size,
token_type_embedding_name="token_type_embeddings",
use_position_embeddings=True,
position_embedding_name="position_embeddings",
initializer_range=config.initializer_range,
max_position_embeddings=config.max_position_embeddings,
dropout_prob=config.hidden_dropout_prob)
with tf.variable_scope("encoder"):
# This converts a 2D mask of shape [batch_size, seq_length] to a 3D
# mask of shape [batch_size, seq_length, seq_length] which is used
# for the attention scores.
attention_mask = create_attention_mask_from_input_mask(
input_ids, input_mask)
# Run the stacked transformer.
# `sequence_output` shape = [batch_size, seq_length, hidden_size].
self.all_encoder_layers = transformer_model(
input_tensor=self.embedding_output,
attention_mask=attention_mask,
hidden_size=config.hidden_size,
num_hidden_layers=config.num_hidden_layers,
num_attention_heads=config.num_attention_heads,
intermediate_size=config.intermediate_size,
intermediate_act_fn=get_activation(config.hidden_act),
hidden_dropout_prob=config.hidden_dropout_prob,
attention_probs_dropout_prob=config.attention_probs_dropout_prob,
initializer_range=config.initializer_range,
do_return_all_layers=True)
self.sequence_output = self.all_encoder_layers[-1]
# The "pooler" converts the encoded sequence tensor of shape
# [batch_size, seq_length, hidden_size] to a tensor of shape
# [batch_size, hidden_size]. This is necessary for segment-level
# (or segment-pair-level) classification tasks where we need a fixed
# dimensional representation of the segment.
with tf.variable_scope("pooler"):
# We "pool" the model by simply taking the hidden state corresponding
        # to the first token. We assume that this has been pre-trained.
first_token_tensor = tf.squeeze(self.sequence_output[:, 0:1, :], axis=1)
self.pooled_output = tf.layers.dense(
first_token_tensor,
config.hidden_size,
activation=tf.tanh,
kernel_initializer=create_initializer(config.initializer_range))
def get_pooled_output(self):
return self.pooled_output
def get_sequence_output(self):
"""Gets final hidden layer of encoder.
Returns:
float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
      to the final hidden layer of the transformer encoder.
"""
return self.sequence_output
def get_all_encoder_layers(self):
return self.all_encoder_layers
def get_embedding_output(self):
"""Gets output of the embedding lookup (i.e., input to the transformer).
Returns:
float Tensor of shape [batch_size, seq_length, hidden_size] corresponding
to the output of the embedding layer, after summing the word
embeddings with the positional embeddings and the token type embeddings,
then performing layer normalization. This is the input to the transformer.
"""
return self.embedding_output
def get_embedding_table(self):
return self.embedding_table
def gelu(x):
"""Gaussian Error Linear Unit.
  This is a smoother version of the ReLU.
Original paper: https://arxiv.org/abs/1606.08415
Args:
x: float Tensor to perform activation.
Returns:
`x` with the GELU activation applied.
"""
cdf = 0.5 * (1.0 + tf.tanh(
(np.sqrt(2 / np.pi) * (x + 0.044715 * tf.pow(x, 3)))))
return x * cdf
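# A hedged sanity check of the tanh approximation above (helper added for
# this document, not part of the original file): the exact GELU is
# x * Phi(x), with Phi the standard normal CDF.
def _gelu_exact_reference(x):
  """Exact GELU via math.erf, for comparison with the tanh approximation."""
  return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
# For example, _gelu_exact_reference(1.0) ~= 0.84134, while the tanh
# approximation above gives ~0.84119 at 1.0 -- agreement to about 1e-3.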
def get_activation(activation_string):
"""Maps a string to a Python function, e.g., "relu" => `tf.nn.relu`.
Args:
activation_string: String name of the activation function.
Returns:
A Python function corresponding to the activation function. If
`activation_string` is None, empty, or "linear", this will return None.
If `activation_string` is not a string, it will return `activation_string`.
Raises:
ValueError: The `activation_string` does not correspond to a known
activation.
"""
# We assume that anything that"s not a string is already an activation
# function, so we just return it.
if not isinstance(activation_string, six.string_types):
return activation_string
if not activation_string:
return None
act = activation_string.lower()
if act == "linear":
return None
elif act == "relu":
return tf.nn.relu
elif act == "gelu":
return gelu
elif act == "tanh":
return tf.tanh
else:
raise ValueError("Unsupported activation: %s" % act)
def get_assignment_map_from_checkpoint(tvars, init_checkpoint):
"""Compute the union of the current variables and checkpoint variables."""
assignment_map = {}
initialized_variable_names = {}
name_to_variable = collections.OrderedDict()
for var in tvars:
name = var.name
m = re.match("^(.*):\\d+$", name)
if m is not None:
name = m.group(1)
name_to_variable[name] = var
init_vars = tf.train.list_variables(init_checkpoint)
assignment_map = collections.OrderedDict()
for x in init_vars:
(name, var) = (x[0], x[1])
if name not in name_to_variable:
continue
assignment_map[name] = name
initialized_variable_names[name] = 1
initialized_variable_names[name + ":0"] = 1
return (assignment_map, initialized_variable_names)
def dropout(input_tensor, dropout_prob):
"""Perform dropout.
Args:
input_tensor: float Tensor.
dropout_prob: Python float. The probability of dropping out a value (NOT of
*keeping* a dimension as in `tf.nn.dropout`).
Returns:
A version of `input_tensor` with dropout applied.
"""
if dropout_prob is None or dropout_prob == 0.0:
return input_tensor
output = tf.nn.dropout(input_tensor, 1.0 - dropout_prob)
return output
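# Note added for this document: TF 1.x's `tf.nn.dropout` takes a *keep*
# probability, so a drop probability of 0.1 becomes keep_prob=0.9 above.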
def layer_norm(input_tensor, name=None):
"""Run layer normalization on the last dimension of the tensor."""
return tf.contrib.layers.layer_norm(
inputs=input_tensor, begin_norm_axis=-1, begin_params_axis=-1, scope=name)
def layer_norm_and_dropout(input_tensor, dropout_prob, name=None):
"""Runs layer normalization followed by dropout."""
output_tensor = layer_norm(input_tensor, name)
output_tensor = dropout(output_tensor, dropout_prob)
return output_tensor
def create_initializer(initializer_range=0.02):
"""Creates a `truncated_normal_initializer` with the given range."""
return tf.truncated_normal_initializer(stddev=initializer_range)
def embedding_lookup(input_ids,
vocab_size,
embedding_size=128,
initializer_range=0.02,
word_embedding_name="word_embeddings",
use_one_hot_embeddings=False):
"""Looks up words embeddings for id tensor.
Args:
input_ids: int32 Tensor of shape [batch_size, seq_length] containing word
ids.
vocab_size: int. Size of the embedding vocabulary.
embedding_size: int. Width of the word embeddings.
initializer_range: float. Embedding initialization range.
word_embedding_name: string. Name of the embedding table.
use_one_hot_embeddings: bool. If True, use one-hot method for word
embeddings. If False, use `tf.gather()`.
Returns:
float Tensor of shape [batch_size, seq_length, embedding_size].
"""
# This function assumes that the input is of shape [batch_size, seq_length,
# num_inputs].
#
# If the input is a 2D tensor of shape [batch_size, seq_length], we
# reshape to [batch_size, seq_length, 1].
if input_ids.shape.ndims == 2:
input_ids = tf.expand_dims(input_ids, axis=[-1])
embedding_table = tf.get_variable(
name=word_embedding_name,
shape=[vocab_size, embedding_size],
initializer=create_initializer(initializer_range))
flat_input_ids = tf.reshape(input_ids, [-1])
if use_one_hot_embeddings:
one_hot_input_ids = tf.one_hot(flat_input_ids, depth=vocab_size)
output = tf.matmul(one_hot_input_ids, embedding_table)
else:
output = tf.gather(embedding_table, flat_input_ids)
input_shape = get_shape_list(input_ids)
output = tf.reshape(output,
input_shape[0:-1] + [input_shape[-1] * embedding_size])
return (output, embedding_table)
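# Hedged NumPy analogue of the `tf.gather` path above (helper added for this
# document as an illustration; the sizes are made up).
def _embedding_lookup_shape_demo():
  ids = np.array([[3, 1, 4], [1, 5, 9]])  # [batch=2, seq=3]
  table = np.random.randn(16, 8)          # [vocab=16, width=8]
  flat = table[ids.reshape(-1)]           # [6, 8], like tf.gather
  return flat.reshape(2, 3, 8)            # back to [2, 3, 8]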
def embedding_postprocessor(input_tensor,
use_token_type=False,
token_type_ids=None,
token_type_vocab_size=16,
token_type_embedding_name="token_type_embeddings",
use_position_embeddings=True,
position_embedding_name="position_embeddings",
initializer_range=0.02,
max_position_embeddings=512,
dropout_prob=0.1):
"""Performs various post-processing on a word embedding tensor.
Args:
input_tensor: float Tensor of shape [batch_size, seq_length,
embedding_size].
use_token_type: bool. Whether to add embeddings for `token_type_ids`.
token_type_ids: (optional) int32 Tensor of shape [batch_size, seq_length].
Must be specified if `use_token_type` is True.
token_type_vocab_size: int. The vocabulary size of `token_type_ids`.
token_type_embedding_name: string. The name of the embedding table variable
for token type ids.
use_position_embeddings: bool. Whether to add position embeddings for the
position of each token in the sequence.
position_embedding_name: string. The name of the embedding table variable
for positional embeddings.
initializer_range: float. Range of the weight initialization.
max_position_embeddings: int. Maximum sequence length that might ever be
used with this model. This can be longer than the sequence length of
input_tensor, but cannot be shorter.
dropout_prob: float. Dropout probability applied to the final output tensor.
Returns:
float tensor with same shape as `input_tensor`.
Raises:
ValueError: One of the tensor shapes or input values is invalid.
"""
input_shape = get_shape_list(input_tensor, expected_rank=3)
batch_size = input_shape[0]
seq_length = input_shape[1]
width = input_shape[2]
output = input_tensor
if use_token_type:
if token_type_ids is None:
      raise ValueError("`token_type_ids` must be specified if "
                       "`use_token_type` is True.")
token_type_table = tf.get_variable(
name=token_type_embedding_name,
shape=[token_type_vocab_size, width],
initializer=create_initializer(initializer_range))
# This vocab will be small so we always do one-hot here, since it is always
# faster for a small vocabulary.
flat_token_type_ids = tf.reshape(token_type_ids, [-1])
one_hot_ids = tf.one_hot(flat_token_type_ids, depth=token_type_vocab_size)
token_type_embeddings = tf.matmul(one_hot_ids, token_type_table)
token_type_embeddings = tf.reshape(token_type_embeddings,
[batch_size, seq_length, width])
output += token_type_embeddings
if use_position_embeddings:
assert_op = tf.assert_less_equal(seq_length, max_position_embeddings)
with tf.control_dependencies([assert_op]):
full_position_embeddings = tf.get_variable(
name=position_embedding_name,
shape=[max_position_embeddings, width],
initializer=create_initializer(initializer_range))
# Since the position embedding table is a learned variable, we create it
# using a (long) sequence length `max_position_embeddings`. The actual
# sequence length might be shorter than this, for faster training of
# tasks that do not have long sequences.
#
# So `full_position_embeddings` is effectively an embedding table
# for position [0, 1, 2, ..., max_position_embeddings-1], and the current
# sequence has positions [0, 1, 2, ... seq_length-1], so we can just
# perform a slice.
position_embeddings = tf.slice(full_position_embeddings, [0, 0],
[seq_length, -1])
num_dims = len(output.shape.as_list())
# Only the last two dimensions are relevant (`seq_length` and `width`), so
# we broadcast among the first dimensions, which is typically just
# the batch size.
position_broadcast_shape = []
for _ in range(num_dims - 2):
position_broadcast_shape.append(1)
position_broadcast_shape.extend([seq_length, width])
position_embeddings = tf.reshape(position_embeddings,
position_broadcast_shape)
output += position_embeddings
output = layer_norm_and_dropout(output, dropout_prob)
return output
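# Illustration (comment added for this document): with
# max_position_embeddings=512 and seq_length=128, the tf.slice above takes
# rows [0, 128) of the [512, width] table and reshapes them to
# [1, 128, width] so they broadcast over the batch dimension.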
def create_attention_mask_from_input_mask(from_tensor, to_mask):
"""Create 3D attention mask from a 2D tensor mask.
Args:
from_tensor: 2D or 3D Tensor of shape [batch_size, from_seq_length, ...].
to_mask: int32 Tensor of shape [batch_size, to_seq_length].
Returns:
float Tensor of shape [batch_size, from_seq_length, to_seq_length].
"""
from_shape = get_shape_list(from_tensor, expected_rank=[2, 3])
batch_size = from_shape[0]
from_seq_length = from_shape[1]
to_shape = get_shape_list(to_mask, expected_rank=2)
to_seq_length = to_shape[1]
to_mask = tf.cast(
tf.reshape(to_mask, [batch_size, 1, to_seq_length]), tf.float32)
# We don't assume that `from_tensor` is a mask (although it could be). We
  # don't actually care if we attend *from* padding tokens (only *to* padding
  # tokens), so we create a tensor of all ones.
#
# `broadcast_ones` = [batch_size, from_seq_length, 1]
broadcast_ones = tf.ones(
shape=[batch_size, from_seq_length, 1], dtype=tf.float32)
# Here we broadcast along two dimensions to create the mask.
mask = broadcast_ones * to_mask
return mask
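# Hedged NumPy analogue of the broadcast above (helper added for this
# document as an illustration): [B, F, 1] ones times a [B, 1, T] mask
# yields the [B, F, T] attention mask.
def _attention_mask_broadcast_demo():
  to_mask = np.array([[1, 1, 0]], dtype=np.float32)      # [B=1, T=3]
  broadcast_ones = np.ones((1, 3, 1), dtype=np.float32)  # [B=1, F=3, 1]
  return broadcast_ones * to_mask.reshape(1, 1, 3)       # [1, F=3, T=3]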
def attention_layer(from_tensor,
to_tensor,
attention_mask=None,
num_attention_heads=1,
size_per_head=512,
query_act=None,
key_act=None,
value_act=None,
attention_probs_dropout_prob=0.0,
initializer_range=0.02,
do_return_2d_tensor=False,
batch_size=None,
from_seq_length=None,
to_seq_length=None):
"""Performs multi-headed attention from `from_tensor` to `to_tensor`.
  This is an implementation of multi-headed attention based on "Attention
  Is All You Need". If `from_tensor` and `to_tensor` are the same, then
  this is self-attention. Each timestep in `from_tensor` attends to the
  corresponding sequence in `to_tensor`, and returns a fixed-width vector.
This function first projects `from_tensor` into a "query" tensor and
`to_tensor` into "key" and "value" tensors. These are (effectively) a list
of tensors of length `num_attention_heads`, where each tensor is of shape
[batch_size, seq_length, size_per_head].
Then, the query and key tensors are dot-producted and scaled. These are
softmaxed to obtain attention probabilities. The value tensors are then
interpolated by these probabilities, then concatenated back to a single
tensor and returned.
  In practice, the multi-headed attention is done with transposes and
reshapes rather than actual separate tensors.
Args:
from_tensor: float Tensor of shape [batch_size, from_seq_length,
from_width].
to_tensor: float Tensor of shape [batch_size, to_seq_length, to_width].
attention_mask: (optional) int32 Tensor of shape [batch_size,
from_seq_length, to_seq_length]. The values should be 1 or 0. The
attention scores will effectively be set to -infinity for any positions in
the mask that are 0, and will be unchanged for positions that are 1.
num_attention_heads: int. Number of attention heads.
size_per_head: int. Size of each attention head.
query_act: (optional) Activation function for the query transform.
key_act: (optional) Activation function for the key transform.
value_act: (optional) Activation function for the value transform.
attention_probs_dropout_prob: (optional) float. Dropout probability of the
attention probabilities.
initializer_range: float. Range of the weight initializer.
do_return_2d_tensor: bool. If True, the output will be of shape [batch_size
* from_seq_length, num_attention_heads * size_per_head]. If False, the
output will be of shape [batch_size, from_seq_length, num_attention_heads
* size_per_head].
batch_size: (Optional) int. If the input is 2D, this might be the batch size
of the 3D version of the `from_tensor` and `to_tensor`.
from_seq_length: (Optional) If the input is 2D, this might be the seq length
of the 3D version of the `from_tensor`.
to_seq_length: (Optional) If the input is 2D, this might be the seq length
of the 3D version of the `to_tensor`.
Returns:
float Tensor of shape [batch_size, from_seq_length,
num_attention_heads * size_per_head]. (If `do_return_2d_tensor` is
true, this will be of shape [batch_size * from_seq_length,
num_attention_heads * size_per_head]).
Raises:
ValueError: Any of the arguments or tensor shapes are invalid.
"""
def transpose_for_scores(input_tensor, batch_size, num_attention_heads,
seq_length, width):
output_tensor = tf.reshape(
input_tensor, [batch_size, seq_length, num_attention_heads, width])
output_tensor = tf.transpose(output_tensor, [0, 2, 1, 3])
return output_tensor
from_shape = get_shape_list(from_tensor, expected_rank=[2, 3])
to_shape = get_shape_list(to_tensor, expected_rank=[2, 3])
if len(from_shape) != len(to_shape):
raise ValueError(
"The rank of `from_tensor` must match the rank of `to_tensor`.")
if len(from_shape) == 3:
batch_size = from_shape[0]
from_seq_length = from_shape[1]
to_seq_length = to_shape[1]
elif len(from_shape) == 2:
if (batch_size is None or from_seq_length is None or to_seq_length is None):
raise ValueError(
"When passing in rank 2 tensors to attention_layer, the values "
"for `batch_size`, `from_seq_length`, and `to_seq_length` "
"must all be specified.")
# Scalar dimensions referenced here:
# B = batch size (number of sequences)
# F = `from_tensor` sequence length
# T = `to_tensor` sequence length
# N = `num_attention_heads`
# H = `size_per_head`
from_tensor_2d = reshape_to_matrix(from_tensor)
to_tensor_2d = reshape_to_matrix(to_tensor)
# `query_layer` = [B*F, N*H]
query_layer = tf.layers.dense(
from_tensor_2d,
num_attention_heads * size_per_head,
activation=query_act,
name="query",
kernel_initializer=create_initializer(initializer_range))
# `key_layer` = [B*T, N*H]
key_layer = tf.layers.dense(
to_tensor_2d,
num_attention_heads * size_per_head,
activation=key_act,
name="key",
kernel_initializer=create_initializer(initializer_range))
# `value_layer` = [B*T, N*H]
value_layer = tf.layers.dense(
to_tensor_2d,
num_attention_heads * size_per_head,
activation=value_act,
name="value",
kernel_initializer=create_initializer(initializer_range))
# `query_layer` = [B, N, F, H]
query_layer = transpose_for_scores(query_layer, batch_size,
num_attention_heads, from_seq_length,
size_per_head)
# `key_layer` = [B, N, T, H]
key_layer = transpose_for_scores(key_layer, batch_size, num_attention_heads,
to_seq_length, size_per_head)
# Take the dot product between "query" and "key" to get the raw
# attention scores.
# `attention_scores` = [B, N, F, T]
attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True)
attention_scores = tf.multiply(attention_scores,
1.0 / math.sqrt(float(size_per_head)))
if attention_mask is not None:
# `attention_mask` = [B, 1, F, T]
attention_mask = tf.expand_dims(attention_mask, axis=[1])
# Since attention_mask is 1.0 for positions we want to attend and 0.0 for
# masked positions, this operation will create a tensor which is 0.0 for
# positions we want to attend and -10000.0 for masked positions.
adder = (1.0 - tf.cast(attention_mask, tf.float32)) * -10000.0
# Since we are adding it to the raw scores before the softmax, this is
# effectively the same as removing these entirely.
attention_scores += adder
# Normalize the attention scores to probabilities.
# `attention_probs` = [B, N, F, T]
attention_probs = tf.nn.softmax(attention_scores)
# This is actually dropping out entire tokens to attend to, which might
# seem a bit unusual, but is taken from the original Transformer paper.
attention_probs = dropout(attention_probs, attention_probs_dropout_prob)
# `value_layer` = [B, T, N, H]
value_layer = tf.reshape(
value_layer,
[batch_size, to_seq_length, num_attention_heads, size_per_head])
# `value_layer` = [B, N, T, H]
value_layer = tf.transpose(value_layer, [0, 2, 1, 3])
# `context_layer` = [B, N, F, H]
context_layer = tf.matmul(attention_probs, value_layer)
# `context_layer` = [B, F, N, H]
context_layer = tf.transpose(context_layer, [0, 2, 1, 3])
if do_return_2d_tensor:
# `context_layer` = [B*F, N*H]
context_layer = tf.reshape(
context_layer,
[batch_size * from_seq_length, num_attention_heads * size_per_head])
else:
# `context_layer` = [B, F, N*H]
context_layer = tf.reshape(
context_layer,
[batch_size, from_seq_length, num_attention_heads * size_per_head])
return context_layer
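# Minimal single-head NumPy sketch of the computation above (helper added
# for this document; shapes follow the B/N/F/T/H comments with B=N=1).
def _scaled_dot_product_attention_demo(q, k, v):
  """q: [F, H], k: [T, H], v: [T, H] -> [F, H], softmax over T."""
  scores = np.matmul(q, k.T) / np.sqrt(float(q.shape[-1]))  # [F, T]
  exp = np.exp(scores - scores.max(axis=-1, keepdims=True))
  probs = exp / exp.sum(axis=-1, keepdims=True)             # softmax
  return np.matmul(probs, v)                                # [F, H]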
def transformer_model(input_tensor,
attention_mask=None,
hidden_size=768,
num_hidden_layers=12,
num_attention_heads=12,
intermediate_size=3072,
intermediate_act_fn=gelu,
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
initializer_range=0.02,
do_return_all_layers=False):
"""Multi-headed, multi-layer Transformer from "Attention is All You Need".
This is almost an exact implementation of the original Transformer encoder.
See the original paper:
https://arxiv.org/abs/1706.03762
Also see:
https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/models/transformer.py
Args:
input_tensor: float Tensor of shape [batch_size, seq_length, hidden_size].
attention_mask: (optional) int32 Tensor of shape [batch_size, seq_length,
      seq_length], with 1 for positions that can be attended to and 0 for
      positions that should not be.
hidden_size: int. Hidden size of the Transformer.
num_hidden_layers: int. Number of layers (blocks) in the Transformer.
num_attention_heads: int. Number of attention heads in the Transformer.
intermediate_size: int. The size of the "intermediate" (a.k.a., feed
forward) layer.
intermediate_act_fn: function. The non-linear activation function to apply
to the output of the intermediate/feed-forward layer.
hidden_dropout_prob: float. Dropout probability for the hidden layers.
attention_probs_dropout_prob: float. Dropout probability of the attention
probabilities.
initializer_range: float. Range of the initializer (stddev of truncated
normal).
do_return_all_layers: Whether to also return all layers or just the final
layer.
Returns:
float Tensor of shape [batch_size, seq_length, hidden_size], the final
hidden layer of the Transformer.
Raises:
ValueError: A Tensor shape or parameter is invalid.
"""
if hidden_size % num_attention_heads != 0:
raise ValueError(
"The hidden size (%d) is not a multiple of the number of attention "
"heads (%d)" % (hidden_size, num_attention_heads))
attention_head_size = int(hidden_size / num_attention_heads)
input_shape = get_shape_list(input_tensor, expected_rank=3)
batch_size = input_shape[0]
seq_length = input_shape[1]
input_width = input_shape[2]
  # The Transformer adds residual connections at every layer, so the input
  # width needs to be the same as the hidden size.
if input_width != hidden_size:
raise ValueError("The width of the input tensor (%d) != hidden size (%d)" %
(input_width, hidden_size))
# We keep the representation as a 2D tensor to avoid re-shaping it back and
# forth from a 3D tensor to a 2D tensor. Re-shapes are normally free on
# the GPU/CPU but may not be free on the TPU, so we want to minimize them to
# help the optimizer.
prev_output = reshape_to_matrix(input_tensor)
all_layer_outputs = []
for layer_idx in range(num_hidden_layers):
with tf.variable_scope("layer_%d" % layer_idx):
layer_input = prev_output
with tf.variable_scope("attention"):
attention_heads = []
with tf.variable_scope("self"):
attention_head = attention_layer(
from_tensor=layer_input,
to_tensor=layer_input,
attention_mask=attention_mask,
num_attention_heads=num_attention_heads,
size_per_head=attention_head_size,
attention_probs_dropout_prob=attention_probs_dropout_prob,
initializer_range=initializer_range,
do_return_2d_tensor=True,
batch_size=batch_size,
from_seq_length=seq_length,
to_seq_length=seq_length)
attention_heads.append(attention_head)
attention_output = None
if len(attention_heads) == 1:
attention_output = attention_heads[0]
else:
# In the case where we have other sequences, we just concatenate
# them to the self-attention head before the projection.
attention_output = tf.concat(attention_heads, axis=-1)
# Run a linear projection of `hidden_size` then add a residual
# with `layer_input`.
with tf.variable_scope("output"):
attention_output = tf.layers.dense(
attention_output,
hidden_size,
kernel_initializer=create_initializer(initializer_range))
attention_output = dropout(attention_output, hidden_dropout_prob)
attention_output = layer_norm(attention_output + layer_input)
# The activation is only applied to the "intermediate" hidden layer.
with tf.variable_scope("intermediate"):
intermediate_output = tf.layers.dense(
attention_output,
intermediate_size,
activation=intermediate_act_fn,
kernel_initializer=create_initializer(initializer_range))
# Down-project back to `hidden_size` then add the residual.
with tf.variable_scope("output"):
layer_output = tf.layers.dense(
intermediate_output,
hidden_size,
kernel_initializer=create_initializer(initializer_range))
layer_output = dropout(layer_output, hidden_dropout_prob)
layer_output = layer_norm(layer_output + attention_output)
prev_output = layer_output
all_layer_outputs.append(layer_output)
if do_return_all_layers:
final_outputs = []
for layer_output in all_layer_outputs:
final_output = reshape_from_matrix(layer_output, input_shape)
final_outputs.append(final_output)
return final_outputs
else:
final_output = reshape_from_matrix(prev_output, input_shape)
return final_output
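# Hedged shape summary (comment added for this document): with batch_size=2,
# seq_length=128 and hidden_size=768, the loop above carries `prev_output`
# as a [2*128, 768] matrix and only reshapes back to [2, 128, 768] once, in
# reshape_from_matrix at the end.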
def get_shape_list(tensor, expected_rank=None, name=None):
"""Returns a list of the shape of tensor, preferring static dimensions.
Args:
tensor: A tf.Tensor object to find the shape of.
expected_rank: (optional) int. The expected rank of `tensor`. If this is
      specified and the `tensor` has a different rank, an exception will be
thrown.
name: Optional name of the tensor for the error message.
Returns:
A list of dimensions of the shape of tensor. All static dimensions will
be returned as python integers, and dynamic dimensions will be returned
as tf.Tensor scalars.
"""
if name is None:
name = tensor.name
if expected_rank is not None:
assert_rank(tensor, expected_rank, name)
shape = tensor.shape.as_list()
non_static_indexes = []
for (index, dim) in enumerate(shape):
if dim is None:
non_static_indexes.append(index)
if not non_static_indexes:
return shape
dyn_shape = tf.shape(tensor)
for index in non_static_indexes:
shape[index] = dyn_shape[index]
return shape
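# Illustration (comment added for this document): for a placeholder of shape
# [None, 128], get_shape_list returns [<scalar int32 Tensor>, 128] -- the
# dynamic batch size as a tensor and the static sequence length as an int.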
def reshape_to_matrix(input_tensor):
"""Reshapes a >= rank 2 tensor to a rank 2 tensor (i.e., a matrix)."""
ndims = input_tensor.shape.ndims
if ndims < 2:
raise ValueError("Input tensor must have at least rank 2. Shape = %s" %
(input_tensor.shape))
if ndims == 2:
return input_tensor
width = input_tensor.shape[-1]
output_tensor = tf.reshape(input_tensor, [-1, width])
return output_tensor
def reshape_from_matrix(output_tensor, orig_shape_list):
"""Reshapes a rank 2 tensor back to its original rank >= 2 tensor."""
if len(orig_shape_list) == 2:
return output_tensor
output_shape = get_shape_list(output_tensor)
orig_dims = orig_shape_list[0:-1]
width = output_shape[-1]
return tf.reshape(output_tensor, orig_dims + [width])
def assert_rank(tensor, expected_rank, name=None):
"""Raises an exception if the tensor rank is not of the expected rank.
Args:
tensor: A tf.Tensor to check the rank of.
expected_rank: Python integer or list of integers, expected rank.
name: Optional name of the tensor for the error message.
Raises:
ValueError: If the expected shape doesn't match the actual shape.
"""
if name is None:
name = tensor.name
expected_rank_dict = {}
if isinstance(expected_rank, six.integer_types):
expected_rank_dict[expected_rank] = True
else:
for x in expected_rank:
expected_rank_dict[x] = True
actual_rank = tensor.shape.ndims
if actual_rank not in expected_rank_dict:
scope_name = tf.get_variable_scope().name
raise ValueError(
"For the tensor `%s` in scope `%s`, the actual rank "
"`%d` (shape = %s) is not equal to the expected rank `%s`" %
(name, scope_name, actual_rank, str(tensor.shape), str(expected_rank)))
|
https://github.com/google-research/bert
|
modeling_test.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import json
import random
import re
import modeling
import six
import tensorflow as tf
class BertModelTest(tf.test.TestCase):
class BertModelTester(object):
def __init__(self,
parent,
batch_size=13,
seq_length=7,
is_training=True,
use_input_mask=True,
use_token_type_ids=True,
vocab_size=99,
hidden_size=32,
num_hidden_layers=5,
num_attention_heads=4,
intermediate_size=37,
hidden_act="gelu",
hidden_dropout_prob=0.1,
attention_probs_dropout_prob=0.1,
max_position_embeddings=512,
type_vocab_size=16,
initializer_range=0.02,
scope=None):
self.parent = parent
self.batch_size = batch_size
self.seq_length = seq_length
self.is_training = is_training
self.use_input_mask = use_input_mask
self.use_token_type_ids = use_token_type_ids
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.intermediate_size = intermediate_size
self.hidden_act = hidden_act
self.hidden_dropout_prob = hidden_dropout_prob
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.max_position_embeddings = max_position_embeddings
self.type_vocab_size = type_vocab_size
self.initializer_range = initializer_range
self.scope = scope
def create_model(self):
input_ids = BertModelTest.ids_tensor([self.batch_size, self.seq_length],
self.vocab_size)
input_mask = None
if self.use_input_mask:
input_mask = BertModelTest.ids_tensor(
[self.batch_size, self.seq_length], vocab_size=2)
token_type_ids = None
if self.use_token_type_ids:
token_type_ids = BertModelTest.ids_tensor(
[self.batch_size, self.seq_length], self.type_vocab_size)
config = modeling.BertConfig(
vocab_size=self.vocab_size,
hidden_size=self.hidden_size,
num_hidden_layers=self.num_hidden_layers,
num_attention_heads=self.num_attention_heads,
intermediate_size=self.intermediate_size,
hidden_act=self.hidden_act,
hidden_dropout_prob=self.hidden_dropout_prob,
attention_probs_dropout_prob=self.attention_probs_dropout_prob,
max_position_embeddings=self.max_position_embeddings,
type_vocab_size=self.type_vocab_size,
initializer_range=self.initializer_range)
model = modeling.BertModel(
config=config,
is_training=self.is_training,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=token_type_ids,
scope=self.scope)
outputs = {
"embedding_output": model.get_embedding_output(),
"sequence_output": model.get_sequence_output(),
"pooled_output": model.get_pooled_output(),
"all_encoder_layers": model.get_all_encoder_layers(),
}
return outputs
def check_output(self, result):
self.parent.assertAllEqual(
result["embedding_output"].shape,
[self.batch_size, self.seq_length, self.hidden_size])
self.parent.assertAllEqual(
result["sequence_output"].shape,
[self.batch_size, self.seq_length, self.hidden_size])
self.parent.assertAllEqual(result["pooled_output"].shape,
[self.batch_size, self.hidden_size])
def test_default(self):
self.run_tester(BertModelTest.BertModelTester(self))
def test_config_to_json_string(self):
config = modeling.BertConfig(vocab_size=99, hidden_size=37)
obj = json.loads(config.to_json_string())
self.assertEqual(obj["vocab_size"], 99)
self.assertEqual(obj["hidden_size"], 37)
def run_tester(self, tester):
with self.test_session() as sess:
ops = tester.create_model()
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
sess.run(init_op)
output_result = sess.run(ops)
tester.check_output(output_result)
self.assert_all_tensors_reachable(sess, [init_op, ops])
@classmethod
def ids_tensor(cls, shape, vocab_size, rng=None, name=None):
"""Creates a random int32 tensor of the shape within the vocab size."""
if rng is None:
rng = random.Random()
total_dims = 1
for dim in shape:
total_dims *= dim
values = []
for _ in range(total_dims):
values.append(rng.randint(0, vocab_size - 1))
return tf.constant(value=values, dtype=tf.int32, shape=shape, name=name)
def assert_all_tensors_reachable(self, sess, outputs):
"""Checks that all the tensors in the graph are reachable from outputs."""
graph = sess.graph
ignore_strings = [
"^.*/assert_less_equal/.*$",
"^.*/dilation_rate$",
"^.*/Tensordot/concat$",
"^.*/Tensordot/concat/axis$",
"^testing/.*$",
]
ignore_regexes = [re.compile(x) for x in ignore_strings]
unreachable = self.get_unreachable_ops(graph, outputs)
filtered_unreachable = []
for x in unreachable:
do_ignore = False
for r in ignore_regexes:
m = r.match(x.name)
if m is not None:
do_ignore = True
if do_ignore:
continue
filtered_unreachable.append(x)
unreachable = filtered_unreachable
self.assertEqual(
len(unreachable), 0, "The following ops are unreachable: %s" %
(" ".join([x.name for x in unreachable])))
@classmethod
def get_unreachable_ops(cls, graph, outputs):
"""Finds all of the tensors in graph that are unreachable from outputs."""
outputs = cls.flatten_recursive(outputs)
output_to_op = collections.defaultdict(list)
op_to_all = collections.defaultdict(list)
assign_out_to_in = collections.defaultdict(list)
for op in graph.get_operations():
for x in op.inputs:
op_to_all[op.name].append(x.name)
for y in op.outputs:
output_to_op[y.name].append(op.name)
op_to_all[op.name].append(y.name)
if str(op.type) == "Assign":
for y in op.outputs:
for x in op.inputs:
assign_out_to_in[y.name].append(x.name)
assign_groups = collections.defaultdict(list)
for out_name in assign_out_to_in.keys():
name_group = assign_out_to_in[out_name]
for n1 in name_group:
assign_groups[n1].append(out_name)
for n2 in name_group:
if n1 != n2:
assign_groups[n1].append(n2)
seen_tensors = {}
stack = [x.name for x in outputs]
while stack:
name = stack.pop()
if name in seen_tensors:
continue
seen_tensors[name] = True
if name in output_to_op:
for op_name in output_to_op[name]:
if op_name in op_to_all:
for input_name in op_to_all[op_name]:
if input_name not in stack:
stack.append(input_name)
expanded_names = []
if name in assign_groups:
for assign_name in assign_groups[name]:
expanded_names.append(assign_name)
for expanded_name in expanded_names:
if expanded_name not in stack:
stack.append(expanded_name)
unreachable_ops = []
for op in graph.get_operations():
is_unreachable = False
all_names = [x.name for x in op.inputs] + [x.name for x in op.outputs]
for name in all_names:
if name not in seen_tensors:
is_unreachable = True
if is_unreachable:
unreachable_ops.append(op)
return unreachable_ops
@classmethod
def flatten_recursive(cls, item):
"""Flattens (potentially nested) a tuple/dictionary/list to a list."""
output = []
if isinstance(item, list):
output.extend(item)
elif isinstance(item, tuple):
output.extend(list(item))
elif isinstance(item, dict):
for (_, v) in six.iteritems(item):
output.append(v)
else:
return [item]
flat_output = []
for x in output:
flat_output.extend(cls.flatten_recursive(x))
return flat_output
if __name__ == "__main__":
tf.test.main()
|
https://github.com/google-research/bert
|
optimization.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Functions and classes related to optimization (weight updates)."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
import tensorflow as tf
def create_optimizer(loss, init_lr, num_train_steps, num_warmup_steps, use_tpu):
"""Creates an optimizer training op."""
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.constant(value=init_lr, shape=[], dtype=tf.float32)
# Implements linear decay of the learning rate.
learning_rate = tf.train.polynomial_decay(
learning_rate,
global_step,
num_train_steps,
end_learning_rate=0.0,
power=1.0,
cycle=False)
# Implements linear warmup. I.e., if global_step < num_warmup_steps, the
# learning rate will be `global_step/num_warmup_steps * init_lr`.
if num_warmup_steps:
global_steps_int = tf.cast(global_step, tf.int32)
warmup_steps_int = tf.constant(num_warmup_steps, dtype=tf.int32)
global_steps_float = tf.cast(global_steps_int, tf.float32)
warmup_steps_float = tf.cast(warmup_steps_int, tf.float32)
warmup_percent_done = global_steps_float / warmup_steps_float
warmup_learning_rate = init_lr * warmup_percent_done
is_warmup = tf.cast(global_steps_int < warmup_steps_int, tf.float32)
learning_rate = (
(1.0 - is_warmup) * learning_rate + is_warmup * warmup_learning_rate)
# It is recommended that you use this optimizer for fine tuning, since this
# is how the model was trained (note that the Adam m/v variables are NOT
  # loaded from init_checkpoint).
optimizer = AdamWeightDecayOptimizer(
learning_rate=learning_rate,
weight_decay_rate=0.01,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-6,
exclude_from_weight_decay=["LayerNorm", "layer_norm", "bias"])
if use_tpu:
optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)
# This is how the model was pre-trained.
(grads, _) = tf.clip_by_global_norm(grads, clip_norm=1.0)
train_op = optimizer.apply_gradients(
zip(grads, tvars), global_step=global_step)
# Normally the global step update is done inside of `apply_gradients`.
# However, `AdamWeightDecayOptimizer` doesn't do this. But if you use
# a different optimizer, you should probably take this line out.
new_global_step = global_step + 1
train_op = tf.group(train_op, [global_step.assign(new_global_step)])
return train_op
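# Hedged pure-Python sketch of the schedule above (helper added for this
# document as an illustration): linear warmup to init_lr over
# num_warmup_steps, then linear (power=1.0) decay to zero.
def _lr_at_step(step, init_lr, num_train_steps, num_warmup_steps):
  if num_warmup_steps and step < num_warmup_steps:
    return init_lr * float(step) / float(num_warmup_steps)
  return init_lr * max(0.0, 1.0 - float(step) / float(num_train_steps))
# e.g. _lr_at_step(5, 1e-4, 100, 10) == 5e-5 (halfway through warmup) and
# _lr_at_step(50, 1e-4, 100, 10) == 5e-5 (halfway through the decay).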
class AdamWeightDecayOptimizer(tf.train.Optimizer):
"""A basic Adam optimizer that includes "correct" L2 weight decay."""
def __init__(self,
learning_rate,
weight_decay_rate=0.0,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-6,
exclude_from_weight_decay=None,
name="AdamWeightDecayOptimizer"):
"""Constructs a AdamWeightDecayOptimizer."""
super(AdamWeightDecayOptimizer, self).__init__(False, name)
self.learning_rate = learning_rate
self.weight_decay_rate = weight_decay_rate
self.beta_1 = beta_1
self.beta_2 = beta_2
self.epsilon = epsilon
self.exclude_from_weight_decay = exclude_from_weight_decay
def apply_gradients(self, grads_and_vars, global_step=None, name=None):
"""See base class."""
assignments = []
for (grad, param) in grads_and_vars:
if grad is None or param is None:
continue
param_name = self._get_variable_name(param.name)
m = tf.get_variable(
name=param_name + "/adam_m",
shape=param.shape.as_list(),
dtype=tf.float32,
trainable=False,
initializer=tf.zeros_initializer())
v = tf.get_variable(
name=param_name + "/adam_v",
shape=param.shape.as_list(),
dtype=tf.float32,
trainable=False,
initializer=tf.zeros_initializer())
# Standard Adam update.
next_m = (
tf.multiply(self.beta_1, m) + tf.multiply(1.0 - self.beta_1, grad))
next_v = (
tf.multiply(self.beta_2, v) + tf.multiply(1.0 - self.beta_2,
tf.square(grad)))
update = next_m / (tf.sqrt(next_v) + self.epsilon)
# Just adding the square of the weights to the loss function is *not*
# the correct way of using L2 regularization/weight decay with Adam,
# since that will interact with the m and v parameters in strange ways.
#
      # Instead we want to decay the weights in a manner that doesn't interact
# with the m/v parameters. This is equivalent to adding the square
# of the weights to the loss with plain (non-momentum) SGD.
if self._do_use_weight_decay(param_name):
update += self.weight_decay_rate * param
update_with_lr = self.learning_rate * update
next_param = param - update_with_lr
assignments.extend(
[param.assign(next_param),
m.assign(next_m),
v.assign(next_v)])
return tf.group(*assignments, name=name)
def _do_use_weight_decay(self, param_name):
"""Whether to use L2 weight decay for `param_name`."""
if not self.weight_decay_rate:
return False
if self.exclude_from_weight_decay:
for r in self.exclude_from_weight_decay:
if re.search(r, param_name) is not None:
return False
return True
def _get_variable_name(self, param_name):
"""Get the variable name from the tensor name."""
m = re.match("^(.*):\\d+$", param_name)
if m is not None:
param_name = m.group(1)
return param_name
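# Hedged NumPy sketch of one decoupled-weight-decay step (helper added for
# this document; it mirrors `apply_gradients` above, including the absence
# of Adam bias correction).
def _adamw_step_demo(param, grad, m, v, lr=1e-3, wd=0.01,
                     beta_1=0.9, beta_2=0.999, epsilon=1e-6):
  import numpy as np
  next_m = beta_1 * m + (1.0 - beta_1) * grad
  next_v = beta_2 * v + (1.0 - beta_2) * np.square(grad)
  update = next_m / (np.sqrt(next_v) + epsilon)
  update += wd * param  # decay the weights directly, not through the loss
  return param - lr * update, next_m, next_v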
|
https://github.com/google-research/bert
|
optimization_test.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import optimization
import tensorflow as tf
class OptimizationTest(tf.test.TestCase):
def test_adam(self):
with self.test_session() as sess:
w = tf.get_variable(
"w",
shape=[3],
initializer=tf.constant_initializer([0.1, -0.2, -0.1]))
x = tf.constant([0.4, 0.2, -0.5])
loss = tf.reduce_mean(tf.square(x - w))
tvars = tf.trainable_variables()
grads = tf.gradients(loss, tvars)
global_step = tf.train.get_or_create_global_step()
optimizer = optimization.AdamWeightDecayOptimizer(learning_rate=0.2)
train_op = optimizer.apply_gradients(zip(grads, tvars), global_step)
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
sess.run(init_op)
for _ in range(100):
sess.run(train_op)
w_np = sess.run(w)
self.assertAllClose(w_np.flat, [0.4, 0.2, -0.5], rtol=1e-2, atol=1e-2)
if __name__ == "__main__":
tf.test.main()
|
https://github.com/google-research/bert
|
predicting_movie_reviews_with_bert_on_tf_hub.ipynb
|
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "Predicting Movie Reviews with BERT on TF Hub.ipynb",
"version": "0.3.2",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"metadata": {
"id": "j0a4mTk9o1Qg",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Copyright 2019 Google Inc.\n",
"\n",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"\n",
"# http://www.apache.org/licenses/LICENSE-2.0\n",
"\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "dCpvgG0vwXAZ",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#Predicting Movie Review Sentiment with BERT on TF Hub"
]
},
{
"metadata": {
"id": "xiYrZKaHwV81",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"If you’ve been following Natural Language Processing over the past year, you’ve probably heard of BERT: Bidirectional Encoder Representations from Transformers. It’s a neural network architecture designed by Google researchers that’s totally transformed what’s state-of-the-art for NLP tasks, like text classification, translation, summarization, and question answering.\n",
"\n",
"Now that BERT's been added to [TF Hub](https://www.tensorflow.org/hub) as a loadable module, it's easy(ish) to add into existing Tensorflow text pipelines. In an existing pipeline, BERT can replace text embedding layers like ELMO and GloVE. Alternatively, [finetuning](http://wiki.fast.ai/index.php/Fine_tuning) BERT can provide both an accuracy boost and faster training time in many cases.\n",
"\n",
"Here, we'll train a model to predict whether an IMDB movie review is positive or negative using BERT in Tensorflow with tf hub. Some code was adapted from [this colab notebook](https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb). Let's get started!"
]
},
{
"metadata": {
"id": "hsZvic2YxnTz",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"from sklearn.model_selection import train_test_split\n",
"import pandas as pd\n",
"import tensorflow as tf\n",
"import tensorflow_hub as hub\n",
"from datetime import datetime"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "cp5wfXDx5SPH",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"In addition to the standard libraries we imported above, we'll need to install BERT's python package."
]
},
{
"metadata": {
"id": "jviywGyWyKsA",
"colab_type": "code",
"outputId": "166f3005-d219-404f-b201-2a0b75480360",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 51
}
},
"cell_type": "code",
"source": [
"!pip install bert-tensorflow"
],
"execution_count": 38,
"outputs": [
{
"output_type": "stream",
"text": [
"Requirement already satisfied: bert-tensorflow in /usr/local/lib/python3.6/dist-packages (1.0.1)\n",
"Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from bert-tensorflow) (1.11.0)\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "hhbGEfwgdEtw",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"import bert\n",
"from bert import run_classifier\n",
"from bert import optimization\n",
"from bert import tokenization"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "KVB3eOcjxxm1",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. If you're running this code in Google's hosted Colab, the directory won't persist after the Colab session ends.\n",
"\n",
"Alternatively, if you're a GCP user, you can store output in a GCP bucket. To do that, set a directory name in OUTPUT_DIR and the name of the GCP bucket in the BUCKET field.\n",
"\n",
"Set DO_DELETE to rewrite the OUTPUT_DIR if it exists. Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist)."
]
},
{
"metadata": {
"id": "US_EAnICvP7f",
"colab_type": "code",
"outputId": "7780a032-31d4-4794-e6aa-664a5d2ae7dd",
"cellView": "form",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
}
},
"cell_type": "code",
"source": [
"# Set the output directory for saving model file\n",
"# Optionally, set a GCP bucket location\n",
"\n",
"OUTPUT_DIR = 'OUTPUT_DIR_NAME'#@param {type:\"string\"}\n",
"#@markdown Whether or not to clear/delete the directory and create a new one\n",
"DO_DELETE = False #@param {type:\"boolean\"}\n",
"#@markdown Set USE_BUCKET and BUCKET if you want to (optionally) store model output on GCP bucket.\n",
"USE_BUCKET = True #@param {type:\"boolean\"}\n",
"BUCKET = 'BUCKET_NAME' #@param {type:\"string\"}\n",
"\n",
"if USE_BUCKET:\n",
" OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, OUTPUT_DIR)\n",
" from google.colab import auth\n",
" auth.authenticate_user()\n",
"\n",
"if DO_DELETE:\n",
" try:\n",
" tf.gfile.DeleteRecursively(OUTPUT_DIR)\n",
" except:\n",
" # Doesn't matter if the directory didn't exist\n",
" pass\n",
"tf.gfile.MakeDirs(OUTPUT_DIR)\n",
"print('***** Model output directory: {} *****'.format(OUTPUT_DIR))\n"
],
"execution_count": 40,
"outputs": [
{
"output_type": "stream",
"text": [
"***** Model output directory: gs://bert-tfhub/aclImdb_v1 *****\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "pmFYvkylMwXn",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#Data"
]
},
{
"metadata": {
"id": "MC_w8SRqN0fr",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"First, let's download the dataset, hosted by Stanford. The code below, which downloads, extracts, and imports the IMDB Large Movie Review Dataset, is borrowed from [this Tensorflow tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub)."
]
},
{
"metadata": {
"id": "fom_ff20gyy6",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"from tensorflow import keras\n",
"import os\n",
"import re\n",
"\n",
"# Load all files from a directory in a DataFrame.\n",
"def load_directory_data(directory):\n",
" data = {}\n",
" data[\"sentence\"] = []\n",
" data[\"sentiment\"] = []\n",
" for file_path in os.listdir(directory):\n",
" with tf.gfile.GFile(os.path.join(directory, file_path), \"r\") as f:\n",
" data[\"sentence\"].append(f.read())\n",
" data[\"sentiment\"].append(re.match(\"\\d+_(\\d+)\\.txt\", file_path).group(1))\n",
" return pd.DataFrame.from_dict(data)\n",
"\n",
"# Merge positive and negative examples, add a polarity column and shuffle.\n",
"def load_dataset(directory):\n",
" pos_df = load_directory_data(os.path.join(directory, \"pos\"))\n",
" neg_df = load_directory_data(os.path.join(directory, \"neg\"))\n",
" pos_df[\"polarity\"] = 1\n",
" neg_df[\"polarity\"] = 0\n",
" return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)\n",
"\n",
"# Download and process the dataset files.\n",
"def download_and_load_datasets(force_download=False):\n",
" dataset = tf.keras.utils.get_file(\n",
" fname=\"aclImdb.tar.gz\", \n",
" origin=\"http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\", \n",
" extract=True)\n",
" \n",
" train_df = load_dataset(os.path.join(os.path.dirname(dataset), \n",
" \"aclImdb\", \"train\"))\n",
" test_df = load_dataset(os.path.join(os.path.dirname(dataset), \n",
" \"aclImdb\", \"test\"))\n",
" \n",
" return train_df, test_df\n"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "2abfwdn-g135",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"train, test = download_and_load_datasets()"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "XA8WHJgzhIZf",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"To keep training fast, we'll take a sample of 5000 train and test examples, respectively."
]
},
{
"metadata": {
"id": "lw_F488eixTV",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"train = train.sample(5000)\n",
"test = test.sample(5000)"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "prRQM8pDi8xI",
"colab_type": "code",
"outputId": "34445cb8-2be0-4379-fdbc-7794091f6049",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
}
},
"cell_type": "code",
"source": [
"train.columns"
],
"execution_count": 44,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"Index(['sentence', 'sentiment', 'polarity'], dtype='object')"
]
},
"metadata": {
"tags": []
},
"execution_count": 44
}
]
},
{
"metadata": {
"id": "sfRnHSz3iSXz",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"For us, our input data is the 'sentence' column and our label is the 'polarity' column (0, 1 for negative and positive, respecitvely)"
]
},
{
"metadata": {
"id": "IuMOGwFui4it",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"DATA_COLUMN = 'sentence'\n",
"LABEL_COLUMN = 'polarity'\n",
"# label_list is the list of labels, i.e. True, False or 0, 1 or 'dog', 'cat'\n",
"label_list = [0, 1]"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "V399W0rqNJ-Z",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#Data Preprocessing\n",
"We'll need to transform our data into a format BERT understands. This involves two steps. First, we create `InputExample`'s using the constructor provided in the BERT library.\n",
"\n",
"- `text_a` is the text we want to classify, which in this case, is the `Request` field in our Dataframe. \n",
"- `text_b` is used if we're training a model to understand the relationship between sentences (i.e. is `text_b` a translation of `text_a`? Is `text_b` an answer to the question asked by `text_a`?). This doesn't apply to our task, so we can leave `text_b` blank.\n",
"- `label` is the label for our example, i.e. True, False"
]
},
{
"metadata": {
"id": "p9gEt5SmM6i6",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Use the InputExample class from BERT's run_classifier code to create examples from the data\n",
"train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this example\n",
" text_a = x[DATA_COLUMN], \n",
" text_b = None, \n",
" label = x[LABEL_COLUMN]), axis = 1)\n",
"\n",
"test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None, \n",
" text_a = x[DATA_COLUMN], \n",
" text_b = None, \n",
" label = x[LABEL_COLUMN]), axis = 1)"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "SCZWZtKxObjh",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library):\n",
"\n",
"\n",
"1. Lowercase our text (if we're using a BERT lowercase model)\n",
"2. Tokenize it (i.e. \"sally says hi\" -> [\"sally\", \"says\", \"hi\"])\n",
"3. Break words into WordPieces (i.e. \"calling\" -> [\"call\", \"##ing\"])\n",
"4. Map our words to indexes using a vocab file that BERT provides\n",
"5. Add special \"CLS\" and \"SEP\" tokens (see the [readme](https://github.com/google-research/bert))\n",
"6. Append \"index\" and \"segment\" tokens to each input (see the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf))\n",
"\n",
"Happily, we don't have to worry about most of these details.\n",
"\n",
"\n"
]
},
{
"metadata": {
"id": "qMWiDtpyQSoU",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"To start, we'll need to load a vocabulary file and lowercasing information directly from the BERT tf hub module:"
]
},
{
"metadata": {
"id": "IhJSe0QHNG7U",
"colab_type": "code",
"outputId": "20b28cc7-3cb3-4ce6-bfff-a7847ce3bbaa",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 34
}
},
"cell_type": "code",
"source": [
"# This is a path to an uncased (all lowercase) version of BERT\n",
"BERT_MODEL_HUB = \"https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1\"\n",
"\n",
"def create_tokenizer_from_hub_module():\n",
" \"\"\"Get the vocab file and casing info from the Hub module.\"\"\"\n",
" with tf.Graph().as_default():\n",
" bert_module = hub.Module(BERT_MODEL_HUB)\n",
" tokenization_info = bert_module(signature=\"tokenization_info\", as_dict=True)\n",
" with tf.Session() as sess:\n",
" vocab_file, do_lower_case = sess.run([tokenization_info[\"vocab_file\"],\n",
" tokenization_info[\"do_lower_case\"]])\n",
" \n",
" return bert.tokenization.FullTokenizer(\n",
" vocab_file=vocab_file, do_lower_case=do_lower_case)\n",
"\n",
"tokenizer = create_tokenizer_from_hub_module()"
],
"execution_count": 47,
"outputs": [
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "z4oFkhpZBDKm",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Great--we just learned that the BERT model we're using expects lowercase data (that's what stored in tokenization_info[\"do_lower_case\"]) and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces:"
]
},
{
"metadata": {
"id": "dsBo6RCtQmwx",
"colab_type": "code",
"outputId": "9af8c917-90ec-4fe9-897b-79dc89ca88e1",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 221
}
},
"cell_type": "code",
"source": [
"tokenizer.tokenize(\"This here's an example of using the BERT tokenizer\")"
],
"execution_count": 48,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"['this',\n",
" 'here',\n",
" \"'\",\n",
" 's',\n",
" 'an',\n",
" 'example',\n",
" 'of',\n",
" 'using',\n",
" 'the',\n",
" 'bert',\n",
" 'token',\n",
" '##izer']"
]
},
"metadata": {
"tags": []
},
"execution_count": 48
}
]
},
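{
"metadata": {
"id": "EDaside1tokA",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"(Editorial aside, not in the original notebook.) To make steps 4 and 5 above concrete, we can add the special tokens and look up vocab indexes ourselves with `convert_tokens_to_ids` -- `convert_examples_to_features` below does this (plus padding, masking and segment ids) for us:"
]
},
{
"metadata": {
"id": "EDaside1tokB",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# A minimal sketch of steps 4-5: add [CLS]/[SEP], then map word pieces to ids.\n",
"tokens = [\"[CLS]\"] + tokenizer.tokenize(\"calling\") + [\"[SEP]\"]\n",
"tokenizer.convert_tokens_to_ids(tokens)"
],
"execution_count": 0,
"outputs": []
},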
{
"metadata": {
"id": "0OEzfFIt6GIc",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Using our tokenizer, we'll call `run_classifier.convert_examples_to_features` on our InputExamples to convert them into features BERT understands."
]
},
{
"metadata": {
"id": "LL5W8gEGRTAf",
"colab_type": "code",
"outputId": "65001dda-155b-48fc-b5fc-1e4cabc8dfbf",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1261
}
},
"cell_type": "code",
"source": [
"# We'll set sequences to be at most 128 tokens long.\n",
"MAX_SEQ_LENGTH = 128\n",
"# Convert our train and test features to InputFeatures that BERT understands.\n",
"train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)\n",
"test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)"
],
"execution_count": 49,
"outputs": [
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Writing example 0 of 5000\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] i ' m watching this on the sci - fi channel right now . it ' s so horrible i can ' t stop watching it ! i ' m a video ##grapher and this movie makes me sad . i feel bad for anyone associated with this movie . some of the camera work is good . most is very questionable . there are a few decent actors in the flick . too bad they ' re surrounded by what must have been the director ' s relatives . that ' s the only way they could have been qualified to be in a movie ! music was a little better than the acting . if you get around to watching this i hope it [SEP]\n",
"INFO:tensorflow:input_ids: 101 1045 1005 1049 3666 2023 2006 1996 16596 1011 10882 3149 2157 2085 1012 2009 1005 1055 2061 9202 1045 2064 1005 1056 2644 3666 2009 999 1045 1005 1049 1037 2678 18657 1998 2023 3185 3084 2033 6517 1012 1045 2514 2919 2005 3087 3378 2007 2023 3185 1012 2070 1997 1996 4950 2147 2003 2204 1012 2087 2003 2200 21068 1012 2045 2024 1037 2261 11519 5889 1999 1996 17312 1012 2205 2919 2027 1005 2128 5129 2011 2054 2442 2031 2042 1996 2472 1005 1055 9064 1012 2008 1005 1055 1996 2069 2126 2027 2071 2031 2042 4591 2000 2022 1999 1037 3185 999 2189 2001 1037 2210 2488 2084 1996 3772 1012 2065 2017 2131 2105 2000 3666 2023 1045 3246 2009 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] i have been a fan of pushing dai ##sies since the very beginning . it is wonderful ##ly thought up , and bryan fuller has the most remarkable ideas for this show . < br / > < br / > it is unbelievable on how much tv has been needing a creative , original show like pushing dai ##sies . it is a huge relief to see a show , that is unlike the rest , where as , if you compared it to some of the newer shows , such as scrub ##s and house , you would see the similarities , and it does get ted ##ious at moments to see shows so close in identity . < br / > < br [SEP]\n",
"INFO:tensorflow:input_ids: 101 1045 2031 2042 1037 5470 1997 6183 18765 14625 2144 1996 2200 2927 1012 2009 2003 6919 2135 2245 2039 1010 1998 8527 12548 2038 1996 2087 9487 4784 2005 2023 2265 1012 1026 7987 1013 1028 1026 7987 1013 1028 2009 2003 23653 2006 2129 2172 2694 2038 2042 11303 1037 5541 1010 2434 2265 2066 6183 18765 14625 1012 2009 2003 1037 4121 4335 2000 2156 1037 2265 1010 2008 2003 4406 1996 2717 1010 2073 2004 1010 2065 2017 4102 2009 2000 2070 1997 1996 10947 3065 1010 2107 2004 18157 2015 1998 2160 1010 2017 2052 2156 1996 12319 1010 1998 2009 2515 2131 6945 6313 2012 5312 2000 2156 3065 2061 2485 1999 4767 1012 1026 7987 1013 1028 1026 7987 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 1 (id = 1)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] this movie starts out promising ##ly , with an early scene in which frank morgan advises against gary cooper ' s marriage to his daughter , anita louise . frank morgan , playing an una ##bas ##hed gold - digger , loudly complain ##s to cooper about his perceived pen ##ury at the hands of his family - including his daughter , anita louise . i am a fan of all 3 actors . frank morgan is ( to my mind ) a hollywood treasure , cooper a legend , and louise a very lovely , versatile and under - appreciated actress seldom seen in the leading role . i also have nothing against teresa wright , and while not blessed with great range , she [SEP]\n",
"INFO:tensorflow:input_ids: 101 2023 3185 4627 2041 10015 2135 1010 2007 2019 2220 3496 1999 2029 3581 5253 25453 2114 5639 6201 1005 1055 3510 2000 2010 2684 1010 12918 8227 1012 3581 5253 1010 2652 2019 14477 22083 9072 2751 1011 28661 1010 9928 17612 2015 2000 6201 2055 2010 8690 7279 13098 2012 1996 2398 1997 2010 2155 1011 2164 2010 2684 1010 12918 8227 1012 1045 2572 1037 5470 1997 2035 1017 5889 1012 3581 5253 2003 1006 2000 2026 2568 1007 1037 5365 8813 1010 6201 1037 5722 1010 1998 8227 1037 2200 8403 1010 22979 1998 2104 1011 12315 3883 15839 2464 1999 1996 2877 2535 1012 1045 2036 2031 2498 2114 12409 6119 1010 1998 2096 2025 10190 2007 2307 2846 1010 2016 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] i was over ##taken by the emotion . un ##for ##get ##table rendering of a wartime story which is unknown to most people . the performances were fault ##less and outstanding . [SEP]\n",
"INFO:tensorflow:input_ids: 101 1045 2001 2058 25310 2011 1996 7603 1012 4895 29278 18150 10880 14259 1997 1037 12498 2466 2029 2003 4242 2000 2087 2111 1012 1996 4616 2020 6346 3238 1998 5151 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 1 (id = 1)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] soldier blue is a movie with pre ##tension ##s : pre ##tension ##s to be some sort of profound statement on man ' s inhuman ##ity to man , on the white man ' s exploitation of and brutality towards indigenous peoples ; a biting , un ##fl ##in ##ching and sar ##don ##ic commentary on the horrors of vietnam . well , sorry , but it fails mis ##era ##bly to be any of those things . what soldier blue actually is is per ##nic ##ious , tri ##te , badly made , dish ##ones ##t rubbish . < br / > < br / > another reviewer here hit the nail on the head in saying that it appears to be a hybrid of [SEP]\n",
"INFO:tensorflow:input_ids: 101 5268 2630 2003 1037 3185 2007 3653 29048 2015 1024 3653 29048 2015 2000 2022 2070 4066 1997 13769 4861 2006 2158 1005 1055 29582 3012 2000 2158 1010 2006 1996 2317 2158 1005 1055 14427 1997 1998 24083 2875 6284 7243 1025 1037 12344 1010 4895 10258 2378 8450 1998 18906 5280 2594 8570 2006 1996 22812 1997 5148 1012 2092 1010 3374 1010 2021 2009 11896 28616 6906 6321 2000 2022 2151 1997 2216 2477 1012 2054 5268 2630 2941 2003 2003 2566 8713 6313 1010 13012 2618 1010 6649 2081 1010 9841 21821 2102 29132 1012 1026 7987 1013 1028 1026 7987 1013 1028 2178 12027 2182 2718 1996 13774 2006 1996 2132 1999 3038 2008 2009 3544 2000 2022 1037 8893 1997 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:Writing example 0 of 5000\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] i just watched this today on tv . it was on abc ' s sunday afternoon movie . < br / > < br / > this wasn ' t a very good movie , but for a low budget independent film like this , it was okay . there is some suspense in it , but there are so many bad qualities that really bring the movie down . the script is pretty lame , and the plot elements aren ' t very realistic , such as the way a 911 operator would laugh and hang up when someone is reporting a murder . i don ' t know what the writer was thinking when they came up with that idea , but it isn [SEP]\n",
"INFO:tensorflow:input_ids: 101 1045 2074 3427 2023 2651 2006 2694 1012 2009 2001 2006 5925 1005 1055 4465 5027 3185 1012 1026 7987 1013 1028 1026 7987 1013 1028 2023 2347 1005 1056 1037 2200 2204 3185 1010 2021 2005 1037 2659 5166 2981 2143 2066 2023 1010 2009 2001 3100 1012 2045 2003 2070 23873 1999 2009 1010 2021 2045 2024 2061 2116 2919 11647 2008 2428 3288 1996 3185 2091 1012 1996 5896 2003 3492 20342 1010 1998 1996 5436 3787 4995 1005 1056 2200 12689 1010 2107 2004 1996 2126 1037 19989 6872 2052 4756 1998 6865 2039 2043 2619 2003 7316 1037 4028 1012 1045 2123 1005 1056 2113 2054 1996 3213 2001 3241 2043 2027 2234 2039 2007 2008 2801 1010 2021 2009 3475 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] from hardly alien sounding lasers , to an elementary school style shuttle crash , \" night ##be ##ast \" is better classified as a far ##cic ##al mix of fake blood and bare chest . the almost pornographic style of the film seems to be a failed attempt to recover from a lack of co ##hesive or effective story . the acting however is not nearly as beast ##ly , many of the young , aspiring , actors ad ##mir ##ably showcase a hidden talent . particularly don lei ##fer ##t and jamie ze ##mare ##l , who shed a well needed sha ##rd of light on this otherwise terrible film . night ##be ##ast would have never shown up on set had he known the [SEP]\n",
"INFO:tensorflow:input_ids: 101 2013 6684 7344 9391 23965 1010 2000 2019 4732 2082 2806 10382 5823 1010 1000 2305 4783 14083 1000 2003 2488 6219 2004 1037 2521 19053 2389 4666 1997 8275 2668 1998 6436 3108 1012 1996 2471 26932 2806 1997 1996 2143 3849 2000 2022 1037 3478 3535 2000 8980 2013 1037 3768 1997 2522 21579 2030 4621 2466 1012 1996 3772 2174 2003 2025 3053 2004 6841 2135 1010 2116 1997 1996 2402 1010 22344 1010 5889 4748 14503 8231 13398 1037 5023 5848 1012 3391 2123 26947 7512 2102 1998 6175 27838 24376 2140 1010 2040 8328 1037 2092 2734 21146 4103 1997 2422 2006 2023 4728 6659 2143 1012 2305 4783 14083 2052 2031 2196 3491 2039 2006 2275 2018 2002 2124 1996 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] here we have the in ##imi ##table charlie chaplin for ##sa ##king his slap ##stick past to tackle the serious subject of anti - semi ##tism , and into ##ler ##ance in general . he portrays two characters - the sweet , innocent jewish barber - a war veteran , and the ravi ##ng and ruthless dictator , aden ##oid h ##yn ##kel . the jewish ghetto in this country is not safe for long , due to the w ##him ##s of h ##yn ##kel and his armed thugs , who routinely rough up its residents , or leave them alone , dependent upon his mood that day or week . the barber is among them , but is befriended by his former commanding officer [SEP]\n",
"INFO:tensorflow:input_ids: 101 2182 2057 2031 1996 1999 27605 10880 4918 23331 2005 3736 6834 2010 14308 21354 2627 2000 11147 1996 3809 3395 1997 3424 1011 4100 17456 1010 1998 2046 3917 6651 1999 2236 1012 2002 17509 2048 3494 1011 1996 4086 1010 7036 3644 13362 1011 1037 2162 8003 1010 1998 1996 16806 3070 1998 18101 21237 1010 16298 9314 1044 6038 11705 1012 1996 3644 17276 1999 2023 2406 2003 2025 3647 2005 2146 1010 2349 2000 1996 1059 14341 2015 1997 1044 6038 11705 1998 2010 4273 24106 1010 2040 19974 5931 2039 2049 3901 1010 2030 2681 2068 2894 1010 7790 2588 2010 6888 2008 2154 2030 2733 1012 1996 13362 2003 2426 2068 1010 2021 2003 23386 2011 2010 2280 7991 2961 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 1 (id = 1)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] i really hated this movie and it ' s the first movie written by stephen king that i didn ' t finish . i was truly disappointed , it was the worst crap i ' ve ever seen . what were you thinking making three hours out of it ? it may have a quite good story , but actors ? no . suspense ? no . romance ? no . horror ? no . it didn ' t have anything . < br / > < br / > it ' s got this strange , crazy science man with einstein - hair , the classic thing . not real at all . and a man keep getting younger all the time . it seems [SEP]\n",
"INFO:tensorflow:input_ids: 101 1045 2428 6283 2023 3185 1998 2009 1005 1055 1996 2034 3185 2517 2011 4459 2332 2008 1045 2134 1005 1056 3926 1012 1045 2001 5621 9364 1010 2009 2001 1996 5409 10231 1045 1005 2310 2412 2464 1012 2054 2020 2017 3241 2437 2093 2847 2041 1997 2009 1029 2009 2089 2031 1037 3243 2204 2466 1010 2021 5889 1029 2053 1012 23873 1029 2053 1012 7472 1029 2053 1012 5469 1029 2053 1012 2009 2134 1005 1056 2031 2505 1012 1026 7987 1013 1028 1026 7987 1013 1028 2009 1005 1055 2288 2023 4326 1010 4689 2671 2158 2007 15313 1011 2606 1010 1996 4438 2518 1012 2025 2613 2012 2035 1012 1998 1037 2158 2562 2893 3920 2035 1996 2051 1012 2009 3849 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: None\n",
"INFO:tensorflow:tokens: [CLS] story chinese tall story tells the story of righteous monk trip ##ita ##ka , who , along with his guardians monkey , sandy and pigs ##y make their journey west on a quest to recover ancient sutra ##s , finally , they reach the final leg of their journey in sha ##che city but all is not as it seems when the city is attacked by evil tree demons . monkey tries his best to battle them but is overwhelmed , knowing his master is in grave danger , he uses his trust ##y golden staff to thrust trip ##ita ##ka to safety . < br / > < br / > the monk ends up being knocked out when he land and when he wakes [SEP]\n",
"INFO:tensorflow:input_ids: 101 2466 2822 4206 2466 4136 1996 2466 1997 19556 8284 4440 6590 2912 1010 2040 1010 2247 2007 2010 14240 10608 1010 7525 1998 14695 2100 2191 2037 4990 2225 2006 1037 8795 2000 8980 3418 26567 2015 1010 2633 1010 2027 3362 1996 2345 4190 1997 2037 4990 1999 21146 5403 2103 2021 2035 2003 2025 2004 2009 3849 2043 1996 2103 2003 4457 2011 4763 3392 7942 1012 10608 5363 2010 2190 2000 2645 2068 2021 2003 13394 1010 4209 2010 3040 2003 1999 6542 5473 1010 2002 3594 2010 3404 2100 3585 3095 2000 7400 4440 6590 2912 2000 3808 1012 1026 7987 1013 1028 1026 7987 1013 1028 1996 8284 4515 2039 2108 6573 2041 2043 2002 2455 1998 2043 2002 17507 102\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 1 (id = 1)\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "ccp5trMwRtmr",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"#Creating a model\n",
"\n",
"Now that we've prepared our data, let's focus on building a model. `create_model` does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph). Next, it creates a single new layer that will be trained to adapt BERT to our sentiment task (i.e. classifying whether a movie review is positive or negative). This strategy of using a mostly trained model is called [fine-tuning](http://wiki.fast.ai/index.php/Fine_tuning)."
]
},
{
"metadata": {
"id": "6o2a5ZIvRcJq",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"def create_model(is_predicting, input_ids, input_mask, segment_ids, labels,\n",
" num_labels):\n",
" \"\"\"Creates a classification model.\"\"\"\n",
"\n",
" bert_module = hub.Module(\n",
" BERT_MODEL_HUB,\n",
" trainable=True)\n",
" bert_inputs = dict(\n",
" input_ids=input_ids,\n",
" input_mask=input_mask,\n",
" segment_ids=segment_ids)\n",
" bert_outputs = bert_module(\n",
" inputs=bert_inputs,\n",
" signature=\"tokens\",\n",
" as_dict=True)\n",
"\n",
" # Use \"pooled_output\" for classification tasks on an entire sentence.\n",
" # Use \"sequence_outputs\" for token-level output.\n",
" output_layer = bert_outputs[\"pooled_output\"]\n",
"\n",
" hidden_size = output_layer.shape[-1].value\n",
"\n",
" # Create our own layer to tune for politeness data.\n",
" output_weights = tf.get_variable(\n",
" \"output_weights\", [num_labels, hidden_size],\n",
" initializer=tf.truncated_normal_initializer(stddev=0.02))\n",
"\n",
" output_bias = tf.get_variable(\n",
" \"output_bias\", [num_labels], initializer=tf.zeros_initializer())\n",
"\n",
" with tf.variable_scope(\"loss\"):\n",
"\n",
" # Dropout helps prevent overfitting\n",
" output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)\n",
"\n",
" logits = tf.matmul(output_layer, output_weights, transpose_b=True)\n",
" logits = tf.nn.bias_add(logits, output_bias)\n",
" log_probs = tf.nn.log_softmax(logits, axis=-1)\n",
"\n",
" # Convert labels into one-hot encoding\n",
" one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)\n",
"\n",
" predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32))\n",
" # If we're predicting, we want predicted labels and the probabiltiies.\n",
" if is_predicting:\n",
" return (predicted_labels, log_probs)\n",
"\n",
" # If we're train/eval, compute loss between predicted and actual label\n",
" per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)\n",
" loss = tf.reduce_mean(per_example_loss)\n",
" return (loss, predicted_labels, log_probs)\n"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "qpE0ZIDOCQzE",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Next we'll wrap our model function in a `model_fn_builder` function that adapts our model to work for training, evaluation, and prediction."
]
},
{
"metadata": {
"id": "FnH-AnOQ9KKW",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# model_fn_builder actually creates our model function\n",
"# using the passed parameters for num_labels, learning_rate, etc.\n",
"def model_fn_builder(num_labels, learning_rate, num_train_steps,\n",
" num_warmup_steps):\n",
" \"\"\"Returns `model_fn` closure for TPUEstimator.\"\"\"\n",
" def model_fn(features, labels, mode, params): # pylint: disable=unused-argument\n",
" \"\"\"The `model_fn` for TPUEstimator.\"\"\"\n",
"\n",
" input_ids = features[\"input_ids\"]\n",
" input_mask = features[\"input_mask\"]\n",
" segment_ids = features[\"segment_ids\"]\n",
" label_ids = features[\"label_ids\"]\n",
"\n",
" is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)\n",
" \n",
" # TRAIN and EVAL\n",
" if not is_predicting:\n",
"\n",
" (loss, predicted_labels, log_probs) = create_model(\n",
" is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)\n",
"\n",
" train_op = bert.optimization.create_optimizer(\n",
" loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)\n",
"\n",
" # Calculate evaluation metrics. \n",
" def metric_fn(label_ids, predicted_labels):\n",
" accuracy = tf.metrics.accuracy(label_ids, predicted_labels)\n",
" f1_score = tf.contrib.metrics.f1_score(\n",
" label_ids,\n",
" predicted_labels)\n",
" auc = tf.metrics.auc(\n",
" label_ids,\n",
" predicted_labels)\n",
" recall = tf.metrics.recall(\n",
" label_ids,\n",
" predicted_labels)\n",
" precision = tf.metrics.precision(\n",
" label_ids,\n",
" predicted_labels) \n",
" true_pos = tf.metrics.true_positives(\n",
" label_ids,\n",
" predicted_labels)\n",
" true_neg = tf.metrics.true_negatives(\n",
" label_ids,\n",
" predicted_labels) \n",
" false_pos = tf.metrics.false_positives(\n",
" label_ids,\n",
" predicted_labels) \n",
" false_neg = tf.metrics.false_negatives(\n",
" label_ids,\n",
" predicted_labels)\n",
" return {\n",
" \"eval_accuracy\": accuracy,\n",
" \"f1_score\": f1_score,\n",
" \"auc\": auc,\n",
" \"precision\": precision,\n",
" \"recall\": recall,\n",
" \"true_positives\": true_pos,\n",
" \"true_negatives\": true_neg,\n",
" \"false_positives\": false_pos,\n",
" \"false_negatives\": false_neg\n",
" }\n",
"\n",
" eval_metrics = metric_fn(label_ids, predicted_labels)\n",
"\n",
" if mode == tf.estimator.ModeKeys.TRAIN:\n",
" return tf.estimator.EstimatorSpec(mode=mode,\n",
" loss=loss,\n",
" train_op=train_op)\n",
" else:\n",
" return tf.estimator.EstimatorSpec(mode=mode,\n",
" loss=loss,\n",
" eval_metric_ops=eval_metrics)\n",
" else:\n",
" (predicted_labels, log_probs) = create_model(\n",
" is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)\n",
"\n",
" predictions = {\n",
" 'probabilities': log_probs,\n",
" 'labels': predicted_labels\n",
" }\n",
" return tf.estimator.EstimatorSpec(mode, predictions=predictions)\n",
"\n",
" # Return the actual model function in the closure\n",
" return model_fn\n"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "OjwJ4bTeWXD8",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Compute train and warmup steps from batch size\n",
"# These hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)\n",
"BATCH_SIZE = 32\n",
"LEARNING_RATE = 2e-5\n",
"NUM_TRAIN_EPOCHS = 3.0\n",
"# Warmup is a period of time where hte learning rate \n",
"# is small and gradually increases--usually helps training.\n",
"WARMUP_PROPORTION = 0.1\n",
"# Model configs\n",
"SAVE_CHECKPOINTS_STEPS = 500\n",
"SAVE_SUMMARY_STEPS = 100"
],
"execution_count": 0,
"outputs": []
},
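{
"metadata": {
"id": "EDaside2lrA",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"(Editorial aside, not in the original notebook.) A sketch of what the warmup comment means, assuming the linear warmup / linear decay schedule used by `bert.optimization.create_optimizer`: the learning rate ramps from 0 up to LEARNING_RATE over the warmup steps, then decays back toward 0."
]
},
{
"metadata": {
"id": "EDaside2lrB",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Hypothetical illustration only -- the real schedule lives inside\n",
"# bert.optimization.create_optimizer.\n",
"def lr_at_step(step, train_steps, warmup_steps, base_lr=LEARNING_RATE):\n",
"  if step < warmup_steps:\n",
"    return base_lr * step / warmup_steps  # linear warmup\n",
"  return base_lr * (1.0 - float(step) / train_steps)  # linear decay\n",
"\n",
"lr_at_step(100, 468, 46)  # a bit past warmup, so slightly decayed"
],
"execution_count": 0,
"outputs": []
},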
{
"metadata": {
"id": "emHf9GhfWBZ_",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Compute # train and warmup steps from batch size\n",
"num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)\n",
"num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)"
],
"execution_count": 0,
"outputs": []
},
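{
"metadata": {
"id": "EDaside3stepA",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"(Editorial aside, not in the original notebook.) Sanity-checking the arithmetic: with the 5,000-example sample above, `int(5000 / 32 * 3.0)` gives 468 train steps and `int(468 * 0.1)` gives 46 warmup steps. The 468 matches the `model.ckpt-468` checkpoint that shows up in the logs below."
]
},
{
"metadata": {
"id": "EDaside3stepB",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Expect 468 and 46 for a 5,000-example training sample.\n",
"print(num_train_steps, num_warmup_steps)"
],
"execution_count": 0,
"outputs": []
},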
{
"metadata": {
"id": "oEJldMr3WYZa",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Specify outpit directory and number of checkpoint steps to save\n",
"run_config = tf.estimator.RunConfig(\n",
" model_dir=OUTPUT_DIR,\n",
" save_summary_steps=SAVE_SUMMARY_STEPS,\n",
" save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS)"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "q_WebpS1X97v",
"colab_type": "code",
"outputId": "1648932a-7391-49d3-8af7-52d514e226e8",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 156
}
},
"cell_type": "code",
"source": [
"model_fn = model_fn_builder(\n",
" num_labels=len(label_list),\n",
" learning_rate=LEARNING_RATE,\n",
" num_train_steps=num_train_steps,\n",
" num_warmup_steps=num_warmup_steps)\n",
"\n",
"estimator = tf.estimator.Estimator(\n",
" model_fn=model_fn,\n",
" config=run_config,\n",
" params={\"batch_size\": BATCH_SIZE})\n"
],
"execution_count": 55,
"outputs": [
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Using config: {'_model_dir': 'gs://bert-tfhub/aclImdb_v1', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 500, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true\n",
"graph_options {\n",
" rewrite_options {\n",
" meta_optimizer_iterations: ONE\n",
" }\n",
"}\n",
", '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fcedb507be0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "NOO3RfG1DYLo",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Next we create an input builder function that takes our training feature set (`train_features`) and produces a generator. This is a pretty standard design pattern for working with Tensorflow [Estimators](https://www.tensorflow.org/guide/estimators)."
]
},
{
"metadata": {
"id": "1Pv2bAlOX_-K",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Create an input function for training. drop_remainder = True for using TPUs.\n",
"train_input_fn = bert.run_classifier.input_fn_builder(\n",
" features=train_features,\n",
" seq_length=MAX_SEQ_LENGTH,\n",
" is_training=True,\n",
" drop_remainder=False)"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "t6Nukby2EB6-",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Now we train our model! For me, using a Colab notebook running on Google's GPUs, my training time was about 14 minutes."
]
},
{
"metadata": {
"id": "nucD4gluYJmK",
"colab_type": "code",
"outputId": "5d728e72-4631-42bf-c48d-3f51d4b968ce",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 68
}
},
"cell_type": "code",
"source": [
"print(f'Beginning Training!')\n",
"current_time = datetime.now()\n",
"estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)\n",
"print(\"Training took time \", datetime.now() - current_time)"
],
"execution_count": 57,
"outputs": [
{
"output_type": "stream",
"text": [
"Beginning Training!\n",
"INFO:tensorflow:Skipping training since max_steps has already saved.\n",
"Training took time 0:00:00.759709\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "CmbLTVniARy3",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Now let's use our test data to see how well our model did:"
]
},
{
"metadata": {
"id": "JIhejfpyJ8Bx",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"test_input_fn = run_classifier.input_fn_builder(\n",
" features=test_features,\n",
" seq_length=MAX_SEQ_LENGTH,\n",
" is_training=False,\n",
" drop_remainder=False)"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "PPVEXhNjYXC-",
"colab_type": "code",
"outputId": "dd5482cd-c558-465f-c854-ec11a0175316",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 445
}
},
"cell_type": "code",
"source": [
"estimator.evaluate(input_fn=test_input_fn, steps=None)"
],
"execution_count": 59,
"outputs": [
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Calling model_fn.\n",
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n"
],
"name": "stdout"
},
{
"output_type": "stream",
"text": [
"/usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/gradients_impl.py:110: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.\n",
" \"Converting sparse IndexedSlices to a dense Tensor of unknown shape. \"\n"
],
"name": "stderr"
},
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Done calling model_fn.\n",
"INFO:tensorflow:Starting evaluation at 2019-02-12T21:04:20Z\n",
"INFO:tensorflow:Graph was finalized.\n",
"INFO:tensorflow:Restoring parameters from gs://bert-tfhub/aclImdb_v1/model.ckpt-468\n",
"INFO:tensorflow:Running local_init_op.\n",
"INFO:tensorflow:Done running local_init_op.\n",
"INFO:tensorflow:Finished evaluation at 2019-02-12-21:06:05\n",
"INFO:tensorflow:Saving dict for global step 468: auc = 0.86659324, eval_accuracy = 0.8664, f1_score = 0.8659711, false_negatives = 375.0, false_positives = 293.0, global_step = 468, loss = 0.51870537, precision = 0.880457, recall = 0.8519542, true_negatives = 2174.0, true_positives = 2158.0\n",
"INFO:tensorflow:Saving 'checkpoint_path' summary for global step 468: gs://bert-tfhub/aclImdb_v1/model.ckpt-468\n"
],
"name": "stdout"
},
{
"output_type": "execute_result",
"data": {
"text/plain": [
"{'auc': 0.86659324,\n",
" 'eval_accuracy': 0.8664,\n",
" 'f1_score': 0.8659711,\n",
" 'false_negatives': 375.0,\n",
" 'false_positives': 293.0,\n",
" 'global_step': 468,\n",
" 'loss': 0.51870537,\n",
" 'precision': 0.880457,\n",
" 'recall': 0.8519542,\n",
" 'true_negatives': 2174.0,\n",
" 'true_positives': 2158.0}"
]
},
"metadata": {
"tags": []
},
"execution_count": 59
}
]
},
{
"metadata": {
"id": "ueKsULteiz1B",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Now let's write code to make predictions on new sentences:"
]
},
{
"metadata": {
"id": "OsrbTD2EJTVl",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"def getPrediction(in_sentences):\n",
" labels = [\"Negative\", \"Positive\"]\n",
" input_examples = [run_classifier.InputExample(guid=\"\", text_a = x, text_b = None, label = 0) for x in in_sentences] # here, \"\" is just a dummy label\n",
" input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)\n",
" predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)\n",
" predictions = estimator.predict(predict_input_fn)\n",
" return [(sentence, prediction['probabilities'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)]"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "-thbodgih_VJ",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"pred_sentences = [\n",
" \"That movie was absolutely awful\",\n",
" \"The acting was a bit lacking\",\n",
" \"The film was creative and surprising\",\n",
" \"Absolutely fantastic!\"\n",
"]"
],
"execution_count": 0,
"outputs": []
},
{
"metadata": {
"id": "QrZmvZySKQTm",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 649
},
"outputId": "3891fafb-a460-4eb8-fa6c-335a5bbc10e5"
},
"cell_type": "code",
"source": [
"predictions = getPrediction(pred_sentences)"
],
"execution_count": 72,
"outputs": [
{
"output_type": "stream",
"text": [
"INFO:tensorflow:Writing example 0 of 4\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: \n",
"INFO:tensorflow:tokens: [CLS] that movie was absolutely awful [SEP]\n",
"INFO:tensorflow:input_ids: 101 2008 3185 2001 7078 9643 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: \n",
"INFO:tensorflow:tokens: [CLS] the acting was a bit lacking [SEP]\n",
"INFO:tensorflow:input_ids: 101 1996 3772 2001 1037 2978 11158 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: \n",
"INFO:tensorflow:tokens: [CLS] the film was creative and surprising [SEP]\n",
"INFO:tensorflow:input_ids: 101 1996 2143 2001 5541 1998 11341 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:*** Example ***\n",
"INFO:tensorflow:guid: \n",
"INFO:tensorflow:tokens: [CLS] absolutely fantastic ! [SEP]\n",
"INFO:tensorflow:input_ids: 101 7078 10392 999 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:input_mask: 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n",
"INFO:tensorflow:label: 0 (id = 0)\n",
"INFO:tensorflow:Calling model_fn.\n",
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n",
"INFO:tensorflow:Done calling model_fn.\n",
"INFO:tensorflow:Graph was finalized.\n",
"INFO:tensorflow:Restoring parameters from gs://bert-tfhub/aclImdb_v1/model.ckpt-468\n",
"INFO:tensorflow:Running local_init_op.\n",
"INFO:tensorflow:Done running local_init_op.\n"
],
"name": "stdout"
}
]
},
{
"metadata": {
"id": "MXkRiEBUqN3n",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"Voila! We have a sentiment classifier!"
]
},
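{
"metadata": {
"id": "EDaside4probA",
"colab_type": "text"
},
"cell_type": "markdown",
"source": [
"(Editorial aside, not in the original notebook.) One caveat before printing them: the 'probabilities' below are log softmax values from `create_model`, which is why they are negative. Exponentiating recovers ordinary probabilities that sum to 1:"
]
},
{
"metadata": {
"id": "EDaside4probB",
"colab_type": "code",
"colab": {}
},
"cell_type": "code",
"source": [
"# Sketch: turn the first prediction's log probabilities back into probabilities.\n",
"import numpy as np\n",
"np.exp(predictions[0][1])"
],
"execution_count": 0,
"outputs": []
},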
{
"metadata": {
"id": "ERkTE8-7oQLZ",
"colab_type": "code",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 221
},
"outputId": "26c33224-dc2c-4b3d-f7b4-ac3ef0a58b27"
},
"cell_type": "code",
"source": [
"predictions"
],
"execution_count": 73,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"[('That movie was absolutely awful',\n",
" array([-4.9142293e-03, -5.3180690e+00], dtype=float32),\n",
" 'Negative'),\n",
" ('The acting was a bit lacking',\n",
" array([-0.03325794, -3.4200459 ], dtype=float32),\n",
" 'Negative'),\n",
" ('The film was creative and surprising',\n",
" array([-5.3589125e+00, -4.7171740e-03], dtype=float32),\n",
" 'Positive'),\n",
" ('Absolutely fantastic!',\n",
" array([-5.0434084 , -0.00647258], dtype=float32),\n",
" 'Positive')]"
]
},
"metadata": {
"tags": []
},
"execution_count": 73
}
]
}
]
}
|
https://github.com/google-research/bert
|
run_classifier.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BERT finetuning runner."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import csv
import os
import modeling
import optimization
import tokenization
import tensorflow as tf
flags = tf.flags
FLAGS = flags.FLAGS
## Required parameters
flags.DEFINE_string(
"data_dir", None,
"The input data dir. Should contain the .tsv files (or other data files) "
"for the task.")
flags.DEFINE_string(
"bert_config_file", None,
"The config json file corresponding to the pre-trained BERT model. "
"This specifies the model architecture.")
flags.DEFINE_string("task_name", None, "The name of the task to train.")
flags.DEFINE_string("vocab_file", None,
"The vocabulary file that the BERT model was trained on.")
flags.DEFINE_string(
"output_dir", None,
"The output directory where the model checkpoints will be written.")
## Other parameters
flags.DEFINE_string(
"init_checkpoint", None,
"Initial checkpoint (usually from a pre-trained BERT model).")
flags.DEFINE_bool(
"do_lower_case", True,
"Whether to lower case the input text. Should be True for uncased "
"models and False for cased models.")
flags.DEFINE_integer(
"max_seq_length", 128,
"The maximum total input sequence length after WordPiece tokenization. "
"Sequences longer than this will be truncated, and sequences shorter "
"than this will be padded.")
flags.DEFINE_bool("do_train", False, "Whether to run training.")
flags.DEFINE_bool("do_eval", False, "Whether to run eval on the dev set.")
flags.DEFINE_bool(
"do_predict", False,
"Whether to run the model in inference mode on the test set.")
flags.DEFINE_integer("train_batch_size", 32, "Total batch size for training.")
flags.DEFINE_integer("eval_batch_size", 8, "Total batch size for eval.")
flags.DEFINE_integer("predict_batch_size", 8, "Total batch size for predict.")
flags.DEFINE_float("learning_rate", 5e-5, "The initial learning rate for Adam.")
flags.DEFINE_float("num_train_epochs", 3.0,
"Total number of training epochs to perform.")
flags.DEFINE_float(
"warmup_proportion", 0.1,
"Proportion of training to perform linear learning rate warmup for. "
"E.g., 0.1 = 10% of training.")
flags.DEFINE_integer("save_checkpoints_steps", 1000,
"How often to save the model checkpoint.")
flags.DEFINE_integer("iterations_per_loop", 1000,
"How many steps to make in each estimator call.")
flags.DEFINE_bool("use_tpu", False, "Whether to use TPU or GPU/CPU.")
tf.flags.DEFINE_string(
"tpu_name", None,
"The Cloud TPU to use for training. This should be either the name "
"used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470 "
"url.")
tf.flags.DEFINE_string(
"tpu_zone", None,
"[Optional] GCE zone where the Cloud TPU is located in. If not "
"specified, we will attempt to automatically detect the GCE project from "
"metadata.")
tf.flags.DEFINE_string(
"gcp_project", None,
"[Optional] Project name for the Cloud TPU-enabled project. If not "
"specified, we will attempt to automatically detect the GCE project from "
"metadata.")
tf.flags.DEFINE_string("master", None, "[Optional] TensorFlow master URL.")
flags.DEFINE_integer(
"num_tpu_cores", 8,
"Only used if `use_tpu` is True. Total number of TPU cores to use.")
class InputExample(object):
"""A single training/test example for simple sequence classification."""
def __init__(self, guid, text_a, text_b=None, label=None):
"""Constructs a InputExample.
Args:
guid: Unique id for the example.
text_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
text_b: (Optional) string. The untokenized text of the second sequence.
Only must be specified for sequence pair tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
self.guid = guid
self.text_a = text_a
self.text_b = text_b
self.label = label
class PaddingInputExample(object):
"""Fake example so the num input examples is a multiple of the batch size.
When running eval/predict on the TPU, we need to pad the number of examples
to be a multiple of the batch size, because the TPU requires a fixed batch
size. The alternative is to drop the last batch, which is bad because it means
the entire output data won't be generated.
We use this class instead of `None` because treating `None` as padding
batches could cause silent errors.
"""
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self,
input_ids,
input_mask,
segment_ids,
label_id,
is_real_example=True):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.label_id = label_id
self.is_real_example = is_real_example
class DataProcessor(object):
"""Base class for data converters for sequence classification data sets."""
def get_train_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the train set."""
raise NotImplementedError()
def get_dev_examples(self, data_dir):
"""Gets a collection of `InputExample`s for the dev set."""
raise NotImplementedError()
def get_test_examples(self, data_dir):
"""Gets a collection of `InputExample`s for prediction."""
raise NotImplementedError()
def get_labels(self):
"""Gets the list of labels for this data set."""
raise NotImplementedError()
@classmethod
def _read_tsv(cls, input_file, quotechar=None):
"""Reads a tab separated value file."""
with tf.gfile.Open(input_file, "r") as f:
reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
lines = []
for line in reader:
lines.append(line)
return lines
class XnliProcessor(DataProcessor):
"""Processor for the XNLI data set."""
def __init__(self):
self.language = "zh"
def get_train_examples(self, data_dir):
"""See base class."""
lines = self._read_tsv(
os.path.join(data_dir, "multinli",
"multinli.train.%s.tsv" % self.language))
examples = []
for (i, line) in enumerate(lines):
if i == 0:
continue
guid = "train-%d" % (i)
text_a = tokenization.convert_to_unicode(line[0])
text_b = tokenization.convert_to_unicode(line[1])
label = tokenization.convert_to_unicode(line[2])
if label == tokenization.convert_to_unicode("contradictory"):
label = tokenization.convert_to_unicode("contradiction")
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
def get_dev_examples(self, data_dir):
"""See base class."""
lines = self._read_tsv(os.path.join(data_dir, "xnli.dev.tsv"))
examples = []
for (i, line) in enumerate(lines):
if i == 0:
continue
guid = "dev-%d" % (i)
language = tokenization.convert_to_unicode(line[0])
if language != tokenization.convert_to_unicode(self.language):
continue
text_a = tokenization.convert_to_unicode(line[6])
text_b = tokenization.convert_to_unicode(line[7])
label = tokenization.convert_to_unicode(line[1])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
def get_labels(self):
"""See base class."""
return ["contradiction", "entailment", "neutral"]
class MnliProcessor(DataProcessor):
"""Processor for the MultiNLI data set (GLUE version)."""
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")),
"dev_matched")
def get_test_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "test_matched.tsv")), "test")
def get_labels(self):
"""See base class."""
return ["contradiction", "entailment", "neutral"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
if i == 0:
continue
guid = "%s-%s" % (set_type, tokenization.convert_to_unicode(line[0]))
text_a = tokenization.convert_to_unicode(line[8])
text_b = tokenization.convert_to_unicode(line[9])
if set_type == "test":
label = "contradiction"
else:
label = tokenization.convert_to_unicode(line[-1])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
class MrpcProcessor(DataProcessor):
"""Processor for the MRPC data set (GLUE version)."""
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
def get_test_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
if i == 0:
continue
guid = "%s-%s" % (set_type, i)
text_a = tokenization.convert_to_unicode(line[3])
text_b = tokenization.convert_to_unicode(line[4])
if set_type == "test":
label = "0"
else:
label = tokenization.convert_to_unicode(line[0])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label))
return examples
class ColaProcessor(DataProcessor):
"""Processor for the CoLA data set (GLUE version)."""
def get_train_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
def get_dev_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
def get_test_examples(self, data_dir):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "test.tsv")), "test")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(lines):
# Only the test set has a header
if set_type == "test" and i == 0:
continue
guid = "%s-%s" % (set_type, i)
if set_type == "test":
text_a = tokenization.convert_to_unicode(line[1])
label = "0"
else:
text_a = tokenization.convert_to_unicode(line[3])
label = tokenization.convert_to_unicode(line[1])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
return examples
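# --- Editorial aside, not part of the original run_classifier.py: a minimal
# sketch of a custom processor for a two-column TSV of (label, sentence) rows,
# e.g. the IMDB polarity data used in the notebook above. The class name and
# column layout are hypothetical; it only shows how a new two-label task
# plugs into the DataProcessor interface. ---
class ImdbProcessor(DataProcessor):
  """Hypothetical processor for a TSV with a label column and a text column."""
  def get_train_examples(self, data_dir):
    return self._create_examples(
        self._read_tsv(os.path.join(data_dir, "train.tsv")), "train")
  def get_dev_examples(self, data_dir):
    return self._create_examples(
        self._read_tsv(os.path.join(data_dir, "dev.tsv")), "dev")
  def get_labels(self):
    return ["0", "1"]
  def _create_examples(self, lines, set_type):
    examples = []
    for (i, line) in enumerate(lines):
      guid = "%s-%s" % (set_type, i)
      text_a = tokenization.convert_to_unicode(line[1])
      label = tokenization.convert_to_unicode(line[0])
      examples.append(
          InputExample(guid=guid, text_a=text_a, text_b=None, label=label))
    return examples
# --- End editorial aside. ---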
def convert_single_example(ex_index, example, label_list, max_seq_length,
tokenizer):
"""Converts a single `InputExample` into a single `InputFeatures`."""
if isinstance(example, PaddingInputExample):
return InputFeatures(
input_ids=[0] * max_seq_length,
input_mask=[0] * max_seq_length,
segment_ids=[0] * max_seq_length,
label_id=0,
is_real_example=False)
label_map = {}
for (i, label) in enumerate(label_list):
label_map[label] = i
tokens_a = tokenizer.tokenize(example.text_a)
tokens_b = None
if example.text_b:
tokens_b = tokenizer.tokenize(example.text_b)
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
_truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
if len(tokens_a) > max_seq_length - 2:
tokens_a = tokens_a[0:(max_seq_length - 2)]
# The convention in BERT is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
# used as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
label_id = label_map[example.label]
if ex_index < 5:
tf.logging.info("*** Example ***")
tf.logging.info("guid: %s" % (example.guid))
tf.logging.info("tokens: %s" % " ".join(
[tokenization.printable_text(x) for x in tokens]))
tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
tf.logging.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
tf.logging.info("label: %s (id = %d)" % (example.label, label_id))
feature = InputFeatures(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
label_id=label_id,
is_real_example=True)
return feature
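# A worked example of the resulting feature layout, assuming max_seq_length=10,
# text_a="hello world" and text_b="hi" (assuming each word is a single
# wordpiece; token ids are illustrative, not from a real vocab):
#
#   tokens:      [CLS] hello world [SEP] hi [SEP]  <4 x padding>
#   segment_ids:   0     0     0    0    1    1     0  0  0  0
#   input_mask:    1     1     1    1    1    1     0  0  0  0
#
# Padding positions keep segment_id 0 and input_mask 0, so they are never
# attended to; label_id is the index of example.label in label_list.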
def file_based_convert_examples_to_features(
examples, label_list, max_seq_length, tokenizer, output_file):
"""Convert a set of `InputExample`s to a TFRecord file."""
writer = tf.python_io.TFRecordWriter(output_file)
for (ex_index, example) in enumerate(examples):
if ex_index % 10000 == 0:
tf.logging.info("Writing example %d of %d" % (ex_index, len(examples)))
feature = convert_single_example(ex_index, example, label_list,
max_seq_length, tokenizer)
def create_int_feature(values):
f = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
return f
features = collections.OrderedDict()
features["input_ids"] = create_int_feature(feature.input_ids)
features["input_mask"] = create_int_feature(feature.input_mask)
features["segment_ids"] = create_int_feature(feature.segment_ids)
features["label_ids"] = create_int_feature([feature.label_id])
features["is_real_example"] = create_int_feature(
[int(feature.is_real_example)])
tf_example = tf.train.Example(features=tf.train.Features(feature=features))
writer.write(tf_example.SerializeToString())
writer.close()
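# A quick sanity check of the written file (a sketch using the TF 1.x record
# iterator; `output_file` is whatever path was passed above):
#
#   for serialized in tf.python_io.tf_record_iterator(output_file):
#     parsed = tf.train.Example.FromString(serialized)
#     print(parsed.features.feature["input_ids"].int64_list.value[:10])
#     break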
def file_based_input_fn_builder(input_file, seq_length, is_training,
drop_remainder):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
name_to_features = {
"input_ids": tf.FixedLenFeature([seq_length], tf.int64),
"input_mask": tf.FixedLenFeature([seq_length], tf.int64),
"segment_ids": tf.FixedLenFeature([seq_length], tf.int64),
"label_ids": tf.FixedLenFeature([], tf.int64),
"is_real_example": tf.FixedLenFeature([], tf.int64),
}
def _decode_record(record, name_to_features):
"""Decodes a record to a TensorFlow example."""
example = tf.parse_single_example(record, name_to_features)
# tf.Example only supports tf.int64, but the TPU only supports tf.int32.
# So cast all int64 to int32.
for name in list(example.keys()):
t = example[name]
if t.dtype == tf.int64:
t = tf.to_int32(t)
example[name] = t
return example
def input_fn(params):
"""The actual input function."""
batch_size = params["batch_size"]
# For training, we want a lot of parallel reading and shuffling.
# For eval, we want no shuffling and parallel reading doesn't matter.
d = tf.data.TFRecordDataset(input_file)
if is_training:
d = d.repeat()
d = d.shuffle(buffer_size=100)
d = d.apply(
tf.contrib.data.map_and_batch(
lambda record: _decode_record(record, name_to_features),
batch_size=batch_size,
drop_remainder=drop_remainder))
return d
return input_fn
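# Minimal usage sketch (the path and batch size are hypothetical): the
# returned closure is handed to an Estimator, which invokes it with a params
# dict containing "batch_size".
#
#   input_fn = file_based_input_fn_builder(
#       input_file="/tmp/train.tf_record",
#       seq_length=128,
#       is_training=True,
#       drop_remainder=True)
#   dataset = input_fn({"batch_size": 32})  # a tf.data.Dataset of dicts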
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
  # This is a simple heuristic which always truncates the longer sequence
  # one token at a time. This makes more sense than truncating an equal
  # percent of tokens from each, since if one sequence is very short then
  # each truncated token likely carries more information than one truncated
  # from a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
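# Worked example: with tokens_a = ["a", "b", "c", "d"], tokens_b = ["x", "y"]
# and max_length = 4, the loop pops from tokens_a twice (it is the longer
# list on both iterations), leaving ["a", "b"] and ["x", "y"].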
def create_model(bert_config, is_training, input_ids, input_mask, segment_ids,
labels, num_labels, use_one_hot_embeddings):
"""Creates a classification model."""
model = modeling.BertModel(
config=bert_config,
is_training=is_training,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=segment_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
# In the demo, we are doing a simple classification task on the entire
# segment.
#
# If you want to use the token-level output, use model.get_sequence_output()
# instead.
output_layer = model.get_pooled_output()
hidden_size = output_layer.shape[-1].value
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
if is_training:
# I.e., 0.1 dropout
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
probabilities = tf.nn.softmax(logits, axis=-1)
log_probs = tf.nn.log_softmax(logits, axis=-1)
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, per_example_loss, logits, probabilities)
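# Shape check (illustrative): with batch size B and hidden size H,
# output_layer is [B, H] and output_weights is [num_labels, H], so the
# matmul with transpose_b=True yields logits of shape [B, num_labels];
# per_example_loss then reduces over the label axis to shape [B].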
def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate,
num_train_steps, num_warmup_steps, use_tpu,
use_one_hot_embeddings):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
tf.logging.info("*** Features ***")
for name in sorted(features.keys()):
tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_real_example = None
if "is_real_example" in features:
is_real_example = tf.cast(features["is_real_example"], dtype=tf.float32)
else:
is_real_example = tf.ones(tf.shape(label_ids), dtype=tf.float32)
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
(total_loss, per_example_loss, logits, probabilities) = create_model(
bert_config, is_training, input_ids, input_mask, segment_ids, label_ids,
num_labels, use_one_hot_embeddings)
tvars = tf.trainable_variables()
initialized_variable_names = {}
scaffold_fn = None
if init_checkpoint:
(assignment_map, initialized_variable_names
) = modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
return tf.train.Scaffold()
scaffold_fn = tpu_scaffold
else:
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
tf.logging.info("**** Trainable Variables ****")
for var in tvars:
init_string = ""
if var.name in initialized_variable_names:
init_string = ", *INIT_FROM_CKPT*"
tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
init_string)
output_spec = None
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = optimization.create_optimizer(
total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
train_op=train_op,
scaffold_fn=scaffold_fn)
elif mode == tf.estimator.ModeKeys.EVAL:
def metric_fn(per_example_loss, label_ids, logits, is_real_example):
predictions = tf.argmax(logits, axis=-1, output_type=tf.int32)
accuracy = tf.metrics.accuracy(
labels=label_ids, predictions=predictions, weights=is_real_example)
loss = tf.metrics.mean(values=per_example_loss, weights=is_real_example)
return {
"eval_accuracy": accuracy,
"eval_loss": loss,
}
eval_metrics = (metric_fn,
[per_example_loss, label_ids, logits, is_real_example])
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
eval_metrics=eval_metrics,
scaffold_fn=scaffold_fn)
else:
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
predictions={"probabilities": probabilities},
scaffold_fn=scaffold_fn)
return output_spec
return model_fn
# This function is not used by this file but is still used by the Colab and
# people who depend on it.
def input_fn_builder(features, seq_length, is_training, drop_remainder):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
all_input_ids = []
all_input_mask = []
all_segment_ids = []
all_label_ids = []
for feature in features:
all_input_ids.append(feature.input_ids)
all_input_mask.append(feature.input_mask)
all_segment_ids.append(feature.segment_ids)
all_label_ids.append(feature.label_id)
def input_fn(params):
"""The actual input function."""
batch_size = params["batch_size"]
num_examples = len(features)
# This is for demo purposes and does NOT scale to large data sets. We do
# not use Dataset.from_generator() because that uses tf.py_func which is
# not TPU compatible. The right way to load data is with TFRecordReader.
d = tf.data.Dataset.from_tensor_slices({
"input_ids":
tf.constant(
all_input_ids, shape=[num_examples, seq_length],
dtype=tf.int32),
"input_mask":
tf.constant(
all_input_mask,
shape=[num_examples, seq_length],
dtype=tf.int32),
"segment_ids":
tf.constant(
all_segment_ids,
shape=[num_examples, seq_length],
dtype=tf.int32),
"label_ids":
tf.constant(all_label_ids, shape=[num_examples], dtype=tf.int32),
})
if is_training:
d = d.repeat()
d = d.shuffle(buffer_size=100)
d = d.batch(batch_size=batch_size, drop_remainder=drop_remainder)
return d
return input_fn
# This function is not used by this file but is still used by the Colab and
# people who depend on it.
def convert_examples_to_features(examples, label_list, max_seq_length,
tokenizer):
"""Convert a set of `InputExample`s to a list of `InputFeatures`."""
features = []
for (ex_index, example) in enumerate(examples):
if ex_index % 10000 == 0:
tf.logging.info("Writing example %d of %d" % (ex_index, len(examples)))
feature = convert_single_example(ex_index, example, label_list,
max_seq_length, tokenizer)
features.append(feature)
return features
def main(_):
tf.logging.set_verbosity(tf.logging.INFO)
processors = {
"cola": ColaProcessor,
"mnli": MnliProcessor,
"mrpc": MrpcProcessor,
"xnli": XnliProcessor,
}
tokenization.validate_case_matches_checkpoint(FLAGS.do_lower_case,
FLAGS.init_checkpoint)
if not FLAGS.do_train and not FLAGS.do_eval and not FLAGS.do_predict:
raise ValueError(
"At least one of `do_train`, `do_eval` or `do_predict' must be True.")
bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
if FLAGS.max_seq_length > bert_config.max_position_embeddings:
raise ValueError(
"Cannot use sequence length %d because the BERT model "
"was only trained up to sequence length %d" %
(FLAGS.max_seq_length, bert_config.max_position_embeddings))
tf.gfile.MakeDirs(FLAGS.output_dir)
task_name = FLAGS.task_name.lower()
if task_name not in processors:
raise ValueError("Task not found: %s" % (task_name))
processor = processors[task_name]()
label_list = processor.get_labels()
tokenizer = tokenization.FullTokenizer(
vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
tpu_cluster_resolver = None
if FLAGS.use_tpu and FLAGS.tpu_name:
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
master=FLAGS.master,
model_dir=FLAGS.output_dir,
save_checkpoints_steps=FLAGS.save_checkpoints_steps,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=FLAGS.iterations_per_loop,
num_shards=FLAGS.num_tpu_cores,
per_host_input_for_training=is_per_host))
train_examples = None
num_train_steps = None
num_warmup_steps = None
if FLAGS.do_train:
train_examples = processor.get_train_examples(FLAGS.data_dir)
num_train_steps = int(
len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs)
num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion)
model_fn = model_fn_builder(
bert_config=bert_config,
num_labels=len(label_list),
init_checkpoint=FLAGS.init_checkpoint,
learning_rate=FLAGS.learning_rate,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=FLAGS.use_tpu,
use_one_hot_embeddings=FLAGS.use_tpu)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=FLAGS.use_tpu,
model_fn=model_fn,
config=run_config,
train_batch_size=FLAGS.train_batch_size,
eval_batch_size=FLAGS.eval_batch_size,
predict_batch_size=FLAGS.predict_batch_size)
if FLAGS.do_train:
train_file = os.path.join(FLAGS.output_dir, "train.tf_record")
file_based_convert_examples_to_features(
train_examples, label_list, FLAGS.max_seq_length, tokenizer, train_file)
tf.logging.info("***** Running training *****")
tf.logging.info(" Num examples = %d", len(train_examples))
tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
tf.logging.info(" Num steps = %d", num_train_steps)
train_input_fn = file_based_input_fn_builder(
input_file=train_file,
seq_length=FLAGS.max_seq_length,
is_training=True,
drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
if FLAGS.do_eval:
eval_examples = processor.get_dev_examples(FLAGS.data_dir)
num_actual_eval_examples = len(eval_examples)
if FLAGS.use_tpu:
# TPU requires a fixed batch size for all batches, therefore the number
# of examples must be a multiple of the batch size, or else examples
# will get dropped. So we pad with fake examples which are ignored
# later on. These do NOT count towards the metric (all tf.metrics
# support a per-instance weight, and these get a weight of 0.0).
while len(eval_examples) % FLAGS.eval_batch_size != 0:
eval_examples.append(PaddingInputExample())
eval_file = os.path.join(FLAGS.output_dir, "eval.tf_record")
file_based_convert_examples_to_features(
eval_examples, label_list, FLAGS.max_seq_length, tokenizer, eval_file)
tf.logging.info("***** Running evaluation *****")
tf.logging.info(" Num examples = %d (%d actual, %d padding)",
len(eval_examples), num_actual_eval_examples,
len(eval_examples) - num_actual_eval_examples)
tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size)
# This tells the estimator to run through the entire set.
eval_steps = None
# However, if running eval on the TPU, you will need to specify the
# number of steps.
if FLAGS.use_tpu:
assert len(eval_examples) % FLAGS.eval_batch_size == 0
eval_steps = int(len(eval_examples) // FLAGS.eval_batch_size)
eval_drop_remainder = True if FLAGS.use_tpu else False
eval_input_fn = file_based_input_fn_builder(
input_file=eval_file,
seq_length=FLAGS.max_seq_length,
is_training=False,
drop_remainder=eval_drop_remainder)
result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
with tf.gfile.GFile(output_eval_file, "w") as writer:
tf.logging.info("***** Eval results *****")
for key in sorted(result.keys()):
tf.logging.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
if FLAGS.do_predict:
predict_examples = processor.get_test_examples(FLAGS.data_dir)
num_actual_predict_examples = len(predict_examples)
if FLAGS.use_tpu:
# TPU requires a fixed batch size for all batches, therefore the number
# of examples must be a multiple of the batch size, or else examples
# will get dropped. So we pad with fake examples which are ignored
# later on.
while len(predict_examples) % FLAGS.predict_batch_size != 0:
predict_examples.append(PaddingInputExample())
predict_file = os.path.join(FLAGS.output_dir, "predict.tf_record")
file_based_convert_examples_to_features(predict_examples, label_list,
FLAGS.max_seq_length, tokenizer,
predict_file)
tf.logging.info("***** Running prediction*****")
tf.logging.info(" Num examples = %d (%d actual, %d padding)",
len(predict_examples), num_actual_predict_examples,
len(predict_examples) - num_actual_predict_examples)
tf.logging.info(" Batch size = %d", FLAGS.predict_batch_size)
predict_drop_remainder = True if FLAGS.use_tpu else False
predict_input_fn = file_based_input_fn_builder(
input_file=predict_file,
seq_length=FLAGS.max_seq_length,
is_training=False,
drop_remainder=predict_drop_remainder)
result = estimator.predict(input_fn=predict_input_fn)
output_predict_file = os.path.join(FLAGS.output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
num_written_lines = 0
tf.logging.info("***** Predict results *****")
for (i, prediction) in enumerate(result):
probabilities = prediction["probabilities"]
if i >= num_actual_predict_examples:
break
output_line = "\t".join(
str(class_probability)
for class_probability in probabilities) + "\n"
writer.write(output_line)
num_written_lines += 1
assert num_written_lines == num_actual_predict_examples
if __name__ == "__main__":
flags.mark_flag_as_required("data_dir")
flags.mark_flag_as_required("task_name")
flags.mark_flag_as_required("vocab_file")
flags.mark_flag_as_required("bert_config_file")
flags.mark_flag_as_required("output_dir")
tf.app.run()
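# A typical fine-tuning invocation of this script (the directory layout is
# hypothetical; the flags themselves are defined near the top of this file):
#
#   python run_classifier.py \
#     --task_name=MRPC \
#     --do_train=true \
#     --do_eval=true \
#     --data_dir=$GLUE_DIR/MRPC \
#     --vocab_file=$BERT_BASE_DIR/vocab.txt \
#     --bert_config_file=$BERT_BASE_DIR/bert_config.json \
#     --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
#     --max_seq_length=128 \
#     --train_batch_size=32 \
#     --output_dir=/tmp/mrpc_output/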
|
https://github.com/google-research/bert
|
run_classifier_with_tfhub.py
|
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BERT finetuning runner with TF-Hub."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import optimization
import run_classifier
import tokenization
import tensorflow as tf
import tensorflow_hub as hub
flags = tf.flags
FLAGS = flags.FLAGS
flags.DEFINE_string(
"bert_hub_module_handle", None,
"Handle for the BERT TF-Hub module.")
def create_model(is_training, input_ids, input_mask, segment_ids, labels,
num_labels, bert_hub_module_handle):
"""Creates a classification model."""
tags = set()
if is_training:
tags.add("train")
bert_module = hub.Module(bert_hub_module_handle, tags=tags, trainable=True)
bert_inputs = dict(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids)
bert_outputs = bert_module(
inputs=bert_inputs,
signature="tokens",
as_dict=True)
# In the demo, we are doing a simple classification task on the entire
# segment.
#
# If you want to use the token-level output, use
# bert_outputs["sequence_output"] instead.
output_layer = bert_outputs["pooled_output"]
hidden_size = output_layer.shape[-1].value
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
if is_training:
# I.e., 0.1 dropout
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
probabilities = tf.nn.softmax(logits, axis=-1)
log_probs = tf.nn.log_softmax(logits, axis=-1)
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, per_example_loss, logits, probabilities)
def model_fn_builder(num_labels, learning_rate, num_train_steps,
num_warmup_steps, use_tpu, bert_hub_module_handle):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
tf.logging.info("*** Features ***")
for name in sorted(features.keys()):
tf.logging.info(" name = %s, shape = %s" % (name, features[name].shape))
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_training = (mode == tf.estimator.ModeKeys.TRAIN)
(total_loss, per_example_loss, logits, probabilities) = create_model(
is_training, input_ids, input_mask, segment_ids, label_ids, num_labels,
bert_hub_module_handle)
output_spec = None
if mode == tf.estimator.ModeKeys.TRAIN:
train_op = optimization.create_optimizer(
total_loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu)
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
train_op=train_op)
elif mode == tf.estimator.ModeKeys.EVAL:
def metric_fn(per_example_loss, label_ids, logits):
predictions = tf.argmax(logits, axis=-1, output_type=tf.int32)
accuracy = tf.metrics.accuracy(label_ids, predictions)
loss = tf.metrics.mean(per_example_loss)
return {
"eval_accuracy": accuracy,
"eval_loss": loss,
}
eval_metrics = (metric_fn, [per_example_loss, label_ids, logits])
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode,
loss=total_loss,
eval_metrics=eval_metrics)
elif mode == tf.estimator.ModeKeys.PREDICT:
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode, predictions={"probabilities": probabilities})
else:
raise ValueError(
"Only TRAIN, EVAL and PREDICT modes are supported: %s" % (mode))
return output_spec
return model_fn
def create_tokenizer_from_hub_module(bert_hub_module_handle):
"""Get the vocab file and casing info from the Hub module."""
with tf.Graph().as_default():
bert_module = hub.Module(bert_hub_module_handle)
tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
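# Usage sketch (the TF-Hub handle below is an assumption about which public
# BERT module the caller wants, not something this file mandates):
#
#   tokenizer = create_tokenizer_from_hub_module(
#       "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1")
#   print(tokenizer.tokenize("This is an example sentence."))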
def main(_):
tf.logging.set_verbosity(tf.logging.INFO)
processors = {
"cola": run_classifier.ColaProcessor,
"mnli": run_classifier.MnliProcessor,
"mrpc": run_classifier.MrpcProcessor,
}
if not FLAGS.do_train and not FLAGS.do_eval:
raise ValueError("At least one of `do_train` or `do_eval` must be True.")
tf.gfile.MakeDirs(FLAGS.output_dir)
task_name = FLAGS.task_name.lower()
if task_name not in processors:
raise ValueError("Task not found: %s" % (task_name))
processor = processors[task_name]()
label_list = processor.get_labels()
tokenizer = create_tokenizer_from_hub_module(FLAGS.bert_hub_module_handle)
tpu_cluster_resolver = None
if FLAGS.use_tpu and FLAGS.tpu_name:
tpu_cluster_resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
FLAGS.tpu_name, zone=FLAGS.tpu_zone, project=FLAGS.gcp_project)
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
cluster=tpu_cluster_resolver,
master=FLAGS.master,
model_dir=FLAGS.output_dir,
save_checkpoints_steps=FLAGS.save_checkpoints_steps,
tpu_config=tf.contrib.tpu.TPUConfig(
iterations_per_loop=FLAGS.iterations_per_loop,
num_shards=FLAGS.num_tpu_cores,
per_host_input_for_training=is_per_host))
train_examples = None
num_train_steps = None
num_warmup_steps = None
if FLAGS.do_train:
train_examples = processor.get_train_examples(FLAGS.data_dir)
num_train_steps = int(
len(train_examples) / FLAGS.train_batch_size * FLAGS.num_train_epochs)
num_warmup_steps = int(num_train_steps * FLAGS.warmup_proportion)
model_fn = model_fn_builder(
num_labels=len(label_list),
learning_rate=FLAGS.learning_rate,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
use_tpu=FLAGS.use_tpu,
bert_hub_module_handle=FLAGS.bert_hub_module_handle)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=FLAGS.use_tpu,
model_fn=model_fn,
config=run_config,
train_batch_size=FLAGS.train_batch_size,
eval_batch_size=FLAGS.eval_batch_size,
predict_batch_size=FLAGS.predict_batch_size)
if FLAGS.do_train:
train_features = run_classifier.convert_examples_to_features(
train_examples, label_list, FLAGS.max_seq_length, tokenizer)
tf.logging.info("***** Running training *****")
tf.logging.info(" Num examples = %d", len(train_examples))
tf.logging.info(" Batch size = %d", FLAGS.train_batch_size)
tf.logging.info(" Num steps = %d", num_train_steps)
train_input_fn = run_classifier.input_fn_builder(
features=train_features,
seq_length=FLAGS.max_seq_length,
is_training=True,
drop_remainder=True)
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
if FLAGS.do_eval:
eval_examples = processor.get_dev_examples(FLAGS.data_dir)
eval_features = run_classifier.convert_examples_to_features(
eval_examples, label_list, FLAGS.max_seq_length, tokenizer)
tf.logging.info("***** Running evaluation *****")
tf.logging.info(" Num examples = %d", len(eval_examples))
tf.logging.info(" Batch size = %d", FLAGS.eval_batch_size)
# This tells the estimator to run through the entire set.
eval_steps = None
# However, if running eval on the TPU, you will need to specify the
# number of steps.
if FLAGS.use_tpu:
# Eval will be slightly WRONG on the TPU because it will truncate
# the last batch.
eval_steps = int(len(eval_examples) / FLAGS.eval_batch_size)
eval_drop_remainder = True if FLAGS.use_tpu else False
eval_input_fn = run_classifier.input_fn_builder(
features=eval_features,
seq_length=FLAGS.max_seq_length,
is_training=False,
drop_remainder=eval_drop_remainder)
result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)
output_eval_file = os.path.join(FLAGS.output_dir, "eval_results.txt")
with tf.gfile.GFile(output_eval_file, "w") as writer:
tf.logging.info("***** Eval results *****")
for key in sorted(result.keys()):
tf.logging.info(" %s = %s", key, str(result[key]))
writer.write("%s = %s\n" % (key, str(result[key])))
if FLAGS.do_predict:
predict_examples = processor.get_test_examples(FLAGS.data_dir)
if FLAGS.use_tpu:
# Discard batch remainder if running on TPU
n = len(predict_examples)
predict_examples = predict_examples[:(n - n % FLAGS.predict_batch_size)]
predict_file = os.path.join(FLAGS.output_dir, "predict.tf_record")
run_classifier.file_based_convert_examples_to_features(
predict_examples, label_list, FLAGS.max_seq_length, tokenizer,
predict_file)
tf.logging.info("***** Running prediction*****")
tf.logging.info(" Num examples = %d", len(predict_examples))
tf.logging.info(" Batch size = %d", FLAGS.predict_batch_size)
predict_input_fn = run_classifier.file_based_input_fn_builder(
input_file=predict_file,
seq_length=FLAGS.max_seq_length,
is_training=False,
drop_remainder=FLAGS.use_tpu)
result = estimator.predict(input_fn=predict_input_fn)
output_predict_file = os.path.join(FLAGS.output_dir, "test_results.tsv")
with tf.gfile.GFile(output_predict_file, "w") as writer:
tf.logging.info("***** Predict results *****")
for prediction in result:
probabilities = prediction["probabilities"]
output_line = "\t".join(
str(class_probability)
for class_probability in probabilities) + "\n"
writer.write(output_line)
if __name__ == "__main__":
flags.mark_flag_as_required("data_dir")
flags.mark_flag_as_required("task_name")
flags.mark_flag_as_required("bert_hub_module_handle")
flags.mark_flag_as_required("output_dir")
tf.app.run()
|