commit 124a69c87b5df26221f4771ff76a0c742a971085
parent dc5676f8d60ec08e7784568206571659b4f72a4b
Author: Steven Atkinson <steven@atkinson.mn>
Date:   Mon, 26 Feb 2024 23:46:10 -0800

[DOCUMENTATION] Initial ReadTheDocs (#382)

* Copy files from https://github.com/readthedocs/tutorial-template.git
* Update docs
* Colab tutorial, API
* Add nam requirement
* Docs: Fix nam requirement
* GUI and CL trainer tutorials
* Update README.md

  Move instructions to ReadTheDocs!

* A few fixes
* Update LICENSE

  Update year

Diffstat:
26 files changed, 413 insertions(+), 145 deletions(-)
diff --git a/.readthedocs.yaml b/.readthedocs.yaml
@@ -0,0 +1,13 @@
+version: "2"
+
+build:
+  os: "ubuntu-22.04"
+  tools:
+    python: "3.10"
+
+python:
+  install:
+    - requirements: docs/requirements.txt
+
+sphinx:
+  configuration: docs/source/conf.py
diff --git a/LICENSE b/LICENSE
@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) 2022 Steven Atkinson
+Copyright (c) 2024 Steven Atkinson
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
diff --git a/README.md b/README.md
@@ -4,146 +4,4 @@ This repository handles training, reamping, and exporting the weights of a model
 
 For playing trained models in real time in a standalone application or plugin, see the partner repo, [NeuralAmpModelerPlugin](https://github.com/sdatkinson/NeuralAmpModelerPlugin).
 
-* [How to use](https://github.com/sdatkinson/neural-amp-modeler/tree/main#how-to-use)
-  * [Google Colab](https://github.com/sdatkinson/neural-amp-modeler/tree/main#google-colab)
-  * [GUI](https://github.com/sdatkinson/neural-amp-modeler/tree/main#gui)
-  * [The command line trainer (all features)](https://github.com/sdatkinson/neural-amp-modeler/tree/main#the-command-line-trainer-all-features)
-* [Standardized reamping files](https://github.com/sdatkinson/neural-amp-modeler/tree/main#standardized-reamping-files)
-* [Other utilities](https://github.com/sdatkinson/neural-amp-modeler/tree/main#other-utilities)
-
-## How to use
-There are three main ways to use the NAM trainer. There are two simplified trainers available (1) in your browser via Google Colab and (2) Locally via a GUI. There is also a full-featured trainer for power users than can be run from the command line.
-
-### Google Colab
-
-If you don't have a good computer for training ML models, you use Google Colab to train
-in the cloud using the pre-made notebooks under `bin\train`.
-
-For the very easiest experience, open
-[`easy_colab.ipynb` on Google Colab](https://colab.research.google.com/github/sdatkinson/neural-amp-modeler/blob/27c6a048025e7894e0d89579cfda6c59d93e0f20/bin/train/easy_colab.ipynb)
-and follow the steps!
-
-### GUI
-
-After installing the Python package, a GUI can be accessed by running `nam` in the command line.
-
-### The command line trainer (all features)
-
-Alternatively, you can clone this repo to your computer and use it locally.
-
-#### Installation
-
-Installation uses [Anaconda](https://www.anaconda.com/) for package management.
-
-For computers with a CUDA-capable GPU (recommended):
-
-```bash
-conda env create -f environment_gpu.yml
-```
-_Note: you may need to modify the CUDA version if your GPU is older. Have a look at [nVIDIA's documentation](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions) if you're not sure._
-
-Otherwise, for a CPU-only install (will train much more slowly):
-
-```bash
-conda env create -f environment_cpu.yml
-```
-
-_Note: if Anaconda takes a long time "`Solving environment...`", then you can speed up installing the environment by using the mamba experimental sovler with `--experimental-solver=libmamba`._
-
-Then activate the environment you've created with
-
-```bash
-conda activate nam
-```
-
-#### Train models (GUI)
-After installing, you can open a GUI trainer by running
-
-```bash
-nam
-```
-
-from the terminal.
-
-#### Train models (Python script)
-For users looking to get more fine-grained control over the modeling process,
-NAM includes a training script that can be run from the terminal. In order to run it
-#### Download audio files
-Download the [v1_1_1.wav](https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link) and [output.wav](https://drive.google.com/file/d/1e0pDzsWgtqBU87NGqa-4FbriDCkccg3q/view?usp=drive_link) to a folder of your choice
-
-##### Update data configuration
-Edit `bin/train/data/single_pair.json` to point to relevant audio files:
-```json
-    "common": {
-        "x_path": "C:\\path\\to\\v1_1_1.wav",
-        "y_path": "C:\\path\\to\\output.wav",
-        "delay": 0
-    }
-```
-
-##### Run training script
-Open up a terminal. Activate your nam environment and call the training with
-```bash
-python bin/train/main.py \
-bin/train/inputs/data/single_pair.json \
-bin/train/inputs/models/demonet.json \
-bin/train/inputs/learning/demo.json \
-bin/train/outputs/MyAmp
-```
-
-`data/single_pair.json` contains the information about the data you're training
-on
-`models/demonet.json` contains information about the model architecture that
-is being trained. The example used here uses a `feather` configured `wavenet`.
-`learning/demo.json` contains information about the training run itself (e.g. number of epochs).
-
-The configuration above runs a short (demo) training. For a real training you may prefer to run something like,
-
-```bash
-python bin/train/main.py \
-bin/train/inputs/data/single_pair.json \
-bin/train/inputs/models/wavenet.json \
-bin/train/inputs/learning/default.json \
-bin/train/outputs/MyAmp
-```
-
-As a side note, NAM uses [PyTorch Lightning](https://lightning.ai/pages/open-source/)
-under the hood as a modeling framework, and you can control many of the Pytorch Lightning configuration options from `bin/train/inputs/learning/default.json`
-
-#### Export a model (to use with [the plugin](https://github.com/sdatkinson/NeuralAmpModelerPlugin))
-Exporting the trained model to a `.nam` file for use with the plugin can be done
-with:
-
-```bash
-python bin/export.py \
-path/to/config_model.json \
-path/to/checkpoints/epoch=123_val_loss=0.000010.ckpt \
-path/to/exported_models/MyAmp
-```
-
-Then, point the plugin at the exported `model.nam` file and you're good to go!
-
-## Standardized reamping files
-
-NAM can train using any paired audio files, but the simplified trainers (Colab and GUI) can use some pre-made audio files for you to reamp through your gear.
-
-You can use any of the following files:
-
-* [v3_0_0.wav](https://drive.google.com/file/d/1Pgf8PdE0rKB1TD4TRPKbpNo1ByR3IOm9/view?usp=drive_link) (preferred)
-* [v2_0_0.wav](https://drive.google.com/file/d/1xnyJP_IZ7NuyDSTJfn-Jmc5lw0IE7nfu/view?usp=drive_link)
-* [v1_1_1.wav](https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link)
-* [v1.wav](https://drive.google.com/file/d/1jxwTHOCx3Zf03DggAsuDTcVqsgokNyhm/view?usp=drive_link)
-
-## Other utilities
-
-#### Run a model on an input signal ("reamping")
-
-Handy if you want to just check it out without needing to use the plugin:
-
-```bash
-python bin/run.py \
-path/to/source.wav \
-path/to/config_model.json \
-path/to/checkpoints/epoch=123_val_loss=0.000010.ckpt \
-path/to/output.wav
-```
+For documentation, check out the [ReadTheDocs](https://neural-amp-modeler.readthedocs.io).
diff --git a/docs/Makefile b/docs/Makefile
@@ -0,0 +1,20 @@
+# Minimal makefile for Sphinx documentation
+#
+
+# You can set these variables from the command line, and also
+# from the environment for the first two.
+SPHINXOPTS    ?=
+SPHINXBUILD   ?= sphinx-build
+SOURCEDIR     = source
+BUILDDIR      = build
+
+# Put it first so that "make" without argument is like "make help".
+help:
+	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
+
+.PHONY: help Makefile
+
+# Catch-all target: route all unknown targets to Sphinx using the new
+# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
+%: Makefile
+	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/make.bat b/docs/make.bat
@@ -0,0 +1,35 @@
+@ECHO OFF
+
+pushd %~dp0
+
+REM Command file for Sphinx documentation
+
+if "%SPHINXBUILD%" == "" (
+	set SPHINXBUILD=sphinx-build
+)
+set SOURCEDIR=source
+set BUILDDIR=build
+
+if "%1" == "" goto help
+
+%SPHINXBUILD% >NUL 2>NUL
+if errorlevel 9009 (
+	echo.
+	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
+	echo.installed, then set the SPHINXBUILD environment variable to point
+	echo.to the full path of the 'sphinx-build' executable. Alternatively you
+	echo.may add the Sphinx directory to PATH.
+	echo.
+	echo.If you don't have Sphinx installed, grab it from
+	echo.http://sphinx-doc.org/
+	exit /b 1
+)
+
+%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+goto end
+
+:help
+%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
+
+:end
+popd
diff --git a/docs/requirements.txt b/docs/requirements.txt
@@ -0,0 +1,3 @@
+sphinx==7.1.2
+sphinx-rtd-theme==1.3.0rc1
+.
diff --git a/docs/source/.gitignore b/docs/source/.gitignore
@@ -0,0 +1,3 @@
+_build/
+generated/
+html/
diff --git a/docs/source/api.rst b/docs/source/api.rst
@@ -0,0 +1,10 @@
+API
+===
+
+.. autosummary::
+   :toctree: generated
+
+   nam.data
+   nam.models
+   nam.train
+   nam.util
diff --git a/docs/source/conf.py b/docs/source/conf.py
@@ -0,0 +1,40 @@
+# Configuration file for the Sphinx documentation builder.
+#
+# Build locally
+# (e.g. https://readthedocs.org/projects/neural-amp-modeler/builds/23551748/)
+#
+# $ python -m sphinx -T -b html -d _build/doctrees -D language=en . ./html
+
+# -- Project information
+
+project = "neural-amp-modeler"
+copyright = "2024 Steven Atkinson"
+author = "Steven Atkinson"
+
+release = "0.8"
+version = "0.8.1"
+
+# -- General configuration
+
+extensions = [
+    "sphinx.ext.duration",
+    "sphinx.ext.doctest",
+    "sphinx.ext.autodoc",
+    "sphinx.ext.autosummary",
+    "sphinx.ext.intersphinx",
+]
+
+intersphinx_mapping = {
+    "python": ("https://docs.python.org/3/", None),
+    "sphinx": ("https://www.sphinx-doc.org/en/master/", None),
+}
+intersphinx_disabled_domains = ["std"]
+
+templates_path = ["_templates"]
+
+# -- Options for HTML output
+
+html_theme = "sphinx_rtd_theme"
+
+# -- Options for EPUB output
+epub_show_urls = "footnote"
diff --git a/docs/source/index.rst b/docs/source/index.rst
@@ -0,0 +1,17 @@
+Welcome to ``neural-amp-modeler``'s documentation!
+==================================================
+
+``neural-amp-modeler`` is a Python package for creating neural network models of
+your guitar (bass, etc.) gear. It works by using two audio files--an input "DI"
+file as well as an output "reamp" file, showing how the gear responds to
+different incoming signals.
+
+Contents
+--------
+
+.. toctree::
+   :maxdepth: 1
+
+   installation
+   tutorials/main
+   api
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
@@ -0,0 +1,18 @@
+Local Installation
+==================
+
+It's recommended to use Anaconda to manage your install. Get Anaconda from
+https://www.anaconda.com/download
+
+If your computer has an nVIDIA GPU, you should install a GPU-compatible version
+of PyTorch first:
+
+.. code-block:: console
+
+   $ conda install -y pytorch pytorch-cuda=11.8 -c pytorch -c nvidia
+
+Next, install NAM using pip:
+
+.. code-block:: console
+
+   $ pip install neural-amp-modeler
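+
+Optionally, you can do a quick sanity check that the install worked and see
+whether PyTorch can use your GPU. This is only a sketch; the exact output will
+depend on your machine:
+
+.. code-block:: python
+
+   import torch
+
+   import nam  # confirms that the package imports cleanly
+
+   # True if PyTorch can see a CUDA-capable GPU; CPU-only installs print False.
+   print("CUDA available:", torch.cuda.is_available())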
diff --git a/docs/source/tutorials/colab.rst b/docs/source/tutorials/colab.rst
@@ -0,0 +1,90 @@
+Training in the cloud with Google Colab
+=======================================
+
+If you don't have a good computer for training ML models, you can use Google
+Colab to train in the cloud using the pre-made Jupyter notebook at
+`bin/train/easy_colab.ipynb <https://github.com/sdatkinson/neural-amp-modeler/blob/main/bin/train/easy_colab.ipynb>`_,
+which is designed to be used with
+`Google Colab <https://colab.research.google.com/>`_.
+
+Opening the notebook
+--------------------
+
+To open the notebook in Colab, follow
+`this link <https://colab.research.google.com/github/sdatkinson/neural-amp-modeler/blob/27c6a048025e7894e0d89579cfda6c59d93e0f20/bin/train/easy_colab.ipynb>`_.
+
+.. note:: Most browsers work, but Firefox can be a bit temperamental. This isn't
+   NAM's fault; Colab just prefers Chrome (unsurprisingly).
+
+You'll be met with a screen like this:
+
+.. image:: media/colab/welcome.png
+
+Reamping: Getting data for your model
+-------------------------------------
+
+In order to train, you're going to need data, which means you're going to need
+an amp or a pedal you want to model, and you're going to need to have gear to
+reamp with it. Start by downloading the standardized test signal here:
+
+.. image:: media/colab/get-input.png
+   :scale: 20 %
+
+If you need help with reamping, others
+`on YouTube <https://www.youtube.com/results?search_query=reamping+tutorial>`_
+have made high-quality tutorials.
+
+.. note:: You need to make sure that your exported file is the same length as
+   the input file. To help with this, the standardized input files are an
+   exact number of seconds long. If you drop them into a DAW session at 120
+   BPM, you can snap your guides to the beat and easily get the reamp of the
+   right length.
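+
+If you'd like to double-check your reamp before uploading it, a small script
+along these lines can compare the two files. This is only a convenience sketch:
+it uses the Python standard library, assumes uncompressed WAV files, and the
+file names are placeholders for your own files.
+
+.. code-block:: python
+
+   import wave
+
+   def frames_and_rate(path):
+       with wave.open(path, "rb") as f:
+           return f.getnframes(), f.getframerate()
+
+   di = frames_and_rate("v1_1_1.wav")     # the standardized input file
+   reamp = frames_and_rate("output.wav")  # your exported reamp
+
+   print("input (frames, rate):", di)
+   print("reamp (frames, rate):", reamp)
+   if di != reamp:
+       print("Length or sample rate differs; fix this before training.")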
+
+However, if you want to skip reamping for your first model, you can download
+these pre-made files:
+
+* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_,
+  a standardized input file.
+* `output.wav <https://drive.google.com/file/d/1e0pDzsWgtqBU87NGqa-4FbriDCkccg3q/view?usp=drive_link>`_,
+  a reamp of the same overdrive used to make
+  `ParametricOD <https://www.neuralampmodeler.com/post/the-first-publicly-available-parametric-neural-amp-model>`_.
+
+To upload your data to Colab, click the Folder icon here:
+
+.. image:: media/colab/file-icon.png
+   :scale: 50 %
+
+and either drag and drop the files into the panel or select them after clicking
+the upload button.
+
+.. image:: media/colab/upload.png
+
+**Wait for the files to finish uploading before proceeding.** If you don't, then
+strange errors will happen.
+
+Training
+--------
+
+At this point, you can train your model with a single click: just click the Play
+button and everything will finish in about 10 minutes.
+
+.. image:: media/colab/1-click-train.png
+
+However, there are a lot of options below that you can use to tweak the
+training, and they are worth getting familiar with.
+
+TODO: explain the options.
+
+Downloading your model
+----------------------
+
+Once training is done, you can download your model as a .nam file from the file
+browser:
+
+.. image:: media/colab/download.png
+   :scale: 20 %
+
+If you don't see it, you might have to refresh the file browser:
+
+.. image:: media/colab/refresh.png
+   :scale: 20 %
diff --git a/docs/source/tutorials/command-line.rst b/docs/source/tutorials/command-line.rst
@@ -0,0 +1,110 @@
+Training locally from the command line
+======================================
+
+The command line trainer is the full-featured option for training models with
+NAM.
+
+Installation
+------------
+
+Currently, you'll want to clone the source repo to train from the command line.
+
+Installation uses `Anaconda <https://www.anaconda.com/>`_ for package management.
+
+For computers with a CUDA-capable GPU (recommended):
+
+.. code-block:: console
+
+   conda env create -f environment_gpu.yml
+
+.. note:: You may need to modify the CUDA version if your GPU is older. Have a
+   look at
+   `nVIDIA's documentation <https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions>`_
+   if you're not sure.
+
+Otherwise, for a CPU-only install (will train much more slowly):
+
+.. code-block:: console
+
+   conda env create -f environment_cpu.yml
+
+.. note:: If Anaconda takes a long time at "`Solving environment...`", then you
+   can speed up installing the environment by using the experimental mamba
+   solver with ``--experimental-solver=libmamba``.
+
+Then activate the environment you've created with
+
+.. code-block:: console
+
+   conda activate nam
+
+Training
+--------
+
+Since the command-line trainer is intended for maximum flexibility, you can
+train from any input/output pair of reamp files you want. However, if you want
+to skip the reamping and use some pre-made files for your first time, you can
+download these files:
+
+* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_
+  (input)
+* `output.wav <https://drive.google.com/file/d/1e0pDzsWgtqBU87NGqa-4FbriDCkccg3q/view?usp=drive_link>`_
+  (output)
+
+Next, edit ``bin/train/data/single_pair.json`` to point to relevant audio files:
+
+.. code-block:: json
+
+   "common": {
+       "x_path": "C:\\path\\to\\v1_1_1.wav",
+       "y_path": "C:\\path\\to\\output.wav",
+       "delay": 0
+   }
+
+.. note:: If you're providing your own audio files, then you need to provide
+   the latency (in samples) between the input and output file. A positive
+   number of samples means that the output lags the input by the provided
+   number of samples; a negative value means that the output `precedes` the
+   input (e.g. because your DAW over-compensated). If you're not sure exactly
+   how much latency there is, it's usually a good idea to add a few samples
+   just so that the model doesn't need to predict the future!
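+
+If you're not sure what value to use for ``delay``, one rough way to estimate
+it is to look at the peak of the cross-correlation between the two files. The
+sketch below is only a starting point, not part of NAM itself; it assumes NumPy
+and SciPy are available in your environment, and you should sanity-check the
+number it reports:
+
+.. code-block:: python
+
+   import numpy as np
+   from scipy.io import wavfile
+   from scipy.signal import correlate
+
+   rate_x, x = wavfile.read("v1_1_1.wav")  # input (DI) file
+   rate_y, y = wavfile.read("output.wav")  # your reamp
+   assert rate_x == rate_y, "Sample rates must match"
+
+   # Mono, floating-point copies; the first ~10 seconds are enough here.
+   n = 10 * rate_x
+   x = x[:n].astype(np.float64)
+   y = y[:n].astype(np.float64)
+   if x.ndim > 1:
+       x = x[:, 0]
+   if y.ndim > 1:
+       y = y[:, 0]
+
+   # The lag at the correlation peak estimates the input-to-output latency.
+   corr = correlate(y, x, mode="full", method="fft")
+   delay = int(np.argmax(np.abs(corr))) - (len(x) - 1)
+   print("Estimated delay (samples):", delay)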
+
+Next, to train, open up a terminal. Activate your nam environment and call the
+training with
+
+.. code-block:: console
+
+   python bin/train/main.py \
+   bin/train/inputs/data/single_pair.json \
+   bin/train/inputs/models/demonet.json \
+   bin/train/inputs/learning/demo.json \
+   bin/train/outputs/MyAmp
+
+* ``data/single_pair.json`` contains the information about the data you're
+  training on.
+* ``models/demonet.json`` contains information about the model architecture that
+  is being trained. The example used here uses a `feather` configured `wavenet`.
+* ``learning/demo.json`` contains information about the training run itself
+  (e.g. number of epochs).
+
+The configuration above runs a short (demo) training. For a real training you
+may prefer to run something like:
+
+.. code-block:: console
+
+   python bin/train/main.py \
+   bin/train/inputs/data/single_pair.json \
+   bin/train/inputs/models/wavenet.json \
+   bin/train/inputs/learning/default.json \
+   bin/train/outputs/MyAmp
+
+.. note:: NAM uses
+   `PyTorch Lightning <https://lightning.ai/pages/open-source/>`_
+   under the hood as a modeling framework, and you can control many of the
+   PyTorch Lightning configuration options from
+   ``bin/train/inputs/learning/default.json``.
+
+Once training is done, a file called ``model.nam`` is created in the output
+directory. To use it, point
+`the plugin <https://github.com/sdatkinson/NeuralAmpModelerPlugin>`_ at the file
+and you're good to go!
diff --git a/docs/source/tutorials/gui.rst b/docs/source/tutorials/gui.rst
@@ -0,0 +1,18 @@
+Training locally with the GUI
+=============================
+
+After installing NAM locally, you can launch the GUI trainer from a terminal
+with:
+
+.. code-block:: console
+
+   $ nam
+
+Training with the GUI requires a reamp based on one of the standardized training
+files:
+
+* `v3_0_0.wav <https://drive.google.com/file/d/1Pgf8PdE0rKB1TD4TRPKbpNo1ByR3IOm9/view?usp=drive_link>`_
+  (preferred)
+* `v2_0_0.wav <https://drive.google.com/file/d/1xnyJP_IZ7NuyDSTJfn-Jmc5lw0IE7nfu/view?usp=drive_link>`_
+* `v1_1_1.wav <https://drive.google.com/file/d/1CMj2uv_x8GIs-3X1reo7squHOVfkOa6s/view?usp=drive_link>`_
+* `v1.wav <https://drive.google.com/file/d/1jxwTHOCx3Zf03DggAsuDTcVqsgokNyhm/view?usp=drive_link>`_
diff --git a/docs/source/tutorials/main.rst b/docs/source/tutorials/main.rst
@@ -0,0 +1,9 @@
+Tutorials
+=========
+
+.. toctree::
+   :maxdepth: 1
+
+   colab
+   gui
+   command-line
\ No newline at end of file
diff --git a/docs/source/tutorials/media/colab/1-click-train.png b/docs/source/tutorials/media/colab/1-click-train.png
Binary files differ.
diff --git a/docs/source/tutorials/media/colab/download.png b/docs/source/tutorials/media/colab/download.png
Binary files differ.
diff --git a/docs/source/tutorials/media/colab/file-icon.png b/docs/source/tutorials/media/colab/file-icon.png
Binary files differ.
diff --git a/docs/source/tutorials/media/colab/get-input.png b/docs/source/tutorials/media/colab/get-input.png
Binary files differ.
diff --git a/docs/source/tutorials/media/colab/refresh.png b/docs/source/tutorials/media/colab/refresh.png
Binary files differ.
diff --git a/docs/source/tutorials/media/colab/upload.png b/docs/source/tutorials/media/colab/upload.png
Binary files differ.
diff --git a/docs/source/tutorials/media/colab/welcome.png b/docs/source/tutorials/media/colab/welcome.png
Binary files differ.
diff --git a/nam/__init__.py b/nam/__init__.py
@@ -2,6 +2,7 @@
 # File Created: Tuesday, 2nd February 2021 9:42:50 pm
 # Author: Steven Atkinson (steven@atkinson.mn)
 
+
 # Hack to recover graceful shutdowns in Windows.
 # This has to happen ASAP
 # See:
diff --git a/nam/data.py b/nam/data.py
@@ -2,6 +2,10 @@
 # Created Date: Saturday February 5th 2022
 # Author: Steven Atkinson (steven@atkinson.mn)
 
+"""
+Functions and classes for working with audio data with NAM
+"""
+
 import abc
 import logging
 from collections import namedtuple
@@ -610,7 +614,11 @@ class Dataset(AbstractDataset, InitializableFromConfig):
 
     @classmethod
     def _validate_preceding_silence(
-        cls, x: torch.Tensor, start: Optional[int], silent_seconds: float, sample_rate: Optional[float]
+        cls,
+        x: torch.Tensor,
+        start: Optional[int],
+        silent_seconds: float,
+        sample_rate: Optional[float],
     ):
         """
         Make sure that the input is silent before the starting index.
diff --git a/nam/models/__init__.py b/nam/models/__init__.py
@@ -2,6 +2,10 @@
 # Created Date: Saturday February 5th 2022
 # Author: Steven Atkinson (steven@atkinson.mn)
 
+"""
+NAM's neural networks
+"""
+
 from . import _base  # noqa F401
 from . import _exportable  # noqa F401
 from . import losses  # noqa F401
diff --git a/nam/train/__init__.py b/nam/train/__init__.py
@@ -1,3 +1,13 @@
 # File: __init__.py
 # Created Date: Sunday December 4th 2022
 # Author: Steven Atkinson (steven@atkinson.mn)
+
+"""
+Code for standardized training with NAM
+"""
+
+__all__ = ["colab", "core", "gui"]
+
+from . import colab
+from . import core
+from . import gui