This section describes environment variables that affect Numba's runtime behaviour. For compile-time environment variables, see Build time environment variables and configuration of optional components.
Numba allows its behaviour to be changed through the use of environment variables. Unless otherwise mentioned, those variables have integer values and default to zero.
For convenience, Numba also supports the use of a configuration file to persist configuration settings. Note: to use this feature, pyyaml must be installed.

The configuration file must be named .numba_config.yaml and be present in the directory from which the Python interpreter is invoked. The configuration file, if present, is read for configuration settings before the environment variables are searched. This means that environment variable settings will override the settings obtained from a configuration file (the configuration file is for setting permanent preferences whereas the environment variables are for ephemeral ones).
The format of the configuration file is a dictionary in YAML format that maps the environment variables below (without the NUMBA_ prefix) to a desired value. For example, to permanently switch on developer mode (the NUMBA_DEVELOPER_MODE environment variable) and control flow graph printing (the NUMBA_DUMP_CFG environment variable), create a configuration file with the contents:

    developer_mode: 1
    dump_cfg: 1
This can be especially useful when you want a set color scheme based on terminal background color. For example, if the terminal background color is black, the dark_bg color scheme would be well suited and can be set for permanent use by adding:
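Following the prefix-stripping rule described above, the NUMBA_COLOR_SCHEME variable becomes a color_scheme key, so the configuration file entry would look like:

```yaml
# .numba_config.yaml in the directory the interpreter is invoked from
color_scheme: dark_bg
```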
These variables globally override flags passed to the jit() decorator.
If set to 0 or 1, globally disable or enable bounds checking, respectively. The default, if the variable is not set or is set to an empty string, is to use the boundscheck flag passed to the jit() decorator for a given function. See the documentation of @jit for more information.
Note: due to limitations in Numba, bounds checking currently produces exception messages that do not match those from NumPy. If you set NUMBA_FULL_TRACEBACKS=1, the full exception message with the axis, index, and shape information will be printed to the terminal.
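A minimal sketch of applying these two variables from Python. It assumes the usual pattern that NUMBA_* variables are read when numba is first imported, so they must be set beforehand; the import itself is left commented so the snippet stands alone:

```python
import os

# Numba reads most NUMBA_* variables at import time, so set them
# before the first `import numba` in the process.
os.environ["NUMBA_BOUNDSCHECK"] = "1"       # globally enable bounds checking
os.environ["NUMBA_FULL_TRACEBACKS"] = "1"   # full axis/index/shape messages

# import numba  # numba would now pick up both settings
print(os.environ["NUMBA_BOUNDSCHECK"], os.environ["NUMBA_FULL_TRACEBACKS"])
```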
These variables influence what is printed out during compilation of JIT functions.
If set to non-zero, developer mode produces full tracebacks and disables help instructions. Default is zero.
If set to non-zero, enable full tracebacks when an exception occurs. Defaults to the value set by NUMBA_DEVELOPER_MODE.
If set to non-zero, show resources for getting help. Default is zero.
If set to non-zero, error message highlighting is disabled. This is useful for running the test suite on CI systems.
Alters the color scheme used in error reporting (requires the colorama package to be installed to work). Valid values are:
- no_color - No color added, just bold font weighting.
- dark_bg - Suitable for terminals with a dark background.
- light_bg - Suitable for terminals with a light background.
- blue_bg - Suitable for terminals with a blue background.
- jupyter_nb - Suitable for use in Jupyter Notebooks.

The default value is no_color. The type of the value is string.
If set to non-zero and pygments is installed, syntax highlighting is applied to Numba IR, LLVM IR and assembly dumps. Default is zero.
If set to non-zero the issuing of performance warnings is disabled. Default is zero.
If set to non-zero, print out all possible debugging information during function compilation. Finer-grained control can be obtained using other variables below.
If set to non-zero, print out debugging information during operation of the compiler frontend, up to and including generation of the Numba Intermediate Representation.
If set to non-zero, enable debug info for the full application by setting the default value of the debug option in jit(). Beware that enabling debug info significantly increases the memory consumption of each compiled function. Default value equals the value of NUMBA_ENABLE_PROFILING.
Set the gdb binary for use in Numba's gdb support. This takes the form of a path and full name of the binary, for example: /path/from/root/to/binary/name_of_gdb_binary. This is to permit the use of a gdb from a non-default location with a non-default name. If not set, gdb is assumed to reside at its default system location.
If set to non-zero, print out debugging information about type inference.
Enables JIT events of LLVM in order to support profiling of jitted functions. This option is automatically enabled under certain profilers.
If set to non-zero, trace certain function calls (function entry and exit events, including arguments and return values).
If set to non-zero, print out information about the Control Flow Graph of compiled functions.
If set to non-zero, print out the Numba Intermediate Representation of compiled functions.
If set to non-zero, print out the Numba Intermediate Representation of compiled functions after conversion to Static Single Assignment (SSA) form.
Dump the Numba IR after declared pass(es). This is useful for debugging IR changes made by given passes. Accepted values are:
- Any pass name (as given by the .name() method on the class)
- Multiple pass names as a comma-separated list
- The token "all", which will print after all passes.

The default value is "none" so as to prevent output.
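As an illustration, the snippet below uses the documented "all" token; specific pass names vary between Numba versions, so none is assumed here. The variable is set before numba would be imported:

```python
import os

# Dump the Numba IR after every compiler pass ("all" is very verbose;
# a comma-separated list of pass names narrows the output instead).
os.environ["NUMBA_DEBUG_PRINT_AFTER"] = "all"

# import numba  # compiling any jitted function now interleaves IR dumps
print(os.environ["NUMBA_DEBUG_PRINT_AFTER"])
```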
If set to non-zero, print out type annotations for compiled functions.
Dump the unoptimized LLVM assembly source of compiled functions. Unoptimized code is usually very verbose; therefore, NUMBA_DUMP_OPTIMIZED is recommended instead.
Dump the LLVM assembly source after the LLVM "function optimization" pass, but before the "module optimization" pass. This is useful mostly when developing Numba itself; otherwise use NUMBA_DUMP_OPTIMIZED.
Dump the LLVM assembly source of compiled functions after all optimization passes. The output includes the raw function as well as its CPython-compatible wrapper (whose name begins with
wrapper.). Note that the function is often inlined inside the wrapper, as well.
Dump debugging information related to the processing associated with the parallel=True jit decorator option.
Dump debugging information related to the runtime scheduler associated with the parallel=True jit decorator option.
Dump statistics about how many operators/calls are converted to parallel for-loops and how many are fused together, which are associated with the parallel=True jit decorator option.
If set to an integer value between 1 and 4 (inclusive), diagnostic information about parallel transforms undertaken by Numba will be written to STDOUT. The higher the value set, the more detailed the information produced.
Dump the native assembly code of compiled functions.
The optimization level; this option is passed straight to LLVM.
Default value: 3
If set to non-zero, enable LLVM loop vectorization.
Default value: 1 (except on 32-bit Windows)
If set to non-zero, enable AVX optimizations in LLVM. This is disabled by default on Sandy Bridge and Ivy Bridge architectures as it can sometimes result in slower code on those platforms.
If set to non-zero and Intel SVML is available, the use of SVML will be disabled.
If set to non-zero, compilation of JIT functions will never entirely fail, but instead generate a fallback that simply interprets the function. This is only to be used if you are migrating a large codebase from an old Numba version (before 0.12), and want to avoid breaking everything at once. Otherwise, please don’t use this.
Disable JIT compilation entirely. The jit() decorator acts as if it performs no operation, and the invocation of decorated functions calls the original Python function instead of a compiled version. This can be useful if you want to run the Python debugger over your code.
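The no-op behaviour can be sketched with a stand-in decorator. fake_jit below is a hypothetical illustration, not Numba's implementation; it models only why the Python debugger sees the original function when NUMBA_DISABLE_JIT is set:

```python
import os

# Set before numba is imported so that jit compilation is skipped.
os.environ["NUMBA_DISABLE_JIT"] = "1"

def fake_jit(func):
    """Hypothetical stand-in for numba.jit, modelling only the
    NUMBA_DISABLE_JIT=1 path: the original function is returned as-is."""
    assert os.environ.get("NUMBA_DISABLE_JIT") == "1"
    return func

@fake_jit
def add(a, b):
    return a + b

print(add(1, 2))  # a plain Python call, steppable with pdb
```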
Override CPU and CPU features detection. By setting NUMBA_CPU_NAME=generic, a generic CPU model is picked for the CPU architecture and the feature list (NUMBA_CPU_FEATURES) defaults to empty. CPU features must be listed with the format +feature1,-feature2, where + indicates enable and - indicates disable. For example, +sse,+sse2,-avx,-avx2 enables SSE and SSE2, and disables AVX and AVX2.
These settings are passed to LLVM for configuring the compilation target. To get a list of available options, use the llc command-line tool from LLVM, for example:
llc -march=x86 -mattr=help
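Putting the pieces above together, a sketch of a portable-code configuration set from Python before numba is imported (the feature string is taken from the example in the text):

```python
import os

# Generic CPU model plus an explicit, conservative feature list;
# must be set before `import numba`.
os.environ["NUMBA_CPU_NAME"] = "generic"
os.environ["NUMBA_CPU_FEATURES"] = "+sse,+sse2,-avx,-avx2"

print(os.environ["NUMBA_CPU_NAME"], os.environ["NUMBA_CPU_FEATURES"])
```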
To force all caching functions (@jit(cache=True)) to emit portable code (portable within the same architecture and OS), simply set NUMBA_CPU_NAME=generic.
Override the size of the function cache for retaining recently deserialized functions in memory. In systems like Dask, it is common for functions to be deserialized multiple times. Numba will cache functions as long as there is a reference somewhere in the interpreter. This cache size variable controls how many functions that are no longer referenced will also be retained, just in case they show up in the future. The implementation of this is not a true LRU, but the large size of the cache should be sufficient for most situations.
Note: this is unrelated to the compilation cache.
Default value: 128
Options for the compilation cache.
If set to non-zero, print out information about operation of the JIT compilation cache.
Override the location of the cache directory. If defined, this should be a valid directory path.
If not defined, Numba picks the cache directory in the following order:
- In-tree cache. Put the cache next to the corresponding source file under a __pycache__ directory, following how .pyc files are stored.
- User-wide cache. Put the cache in the user's application directory using appdirs.user_cache_dir from the Appdirs package.
- IPython cache. Put the cache in an IPython-specific application directory. Stores are made under numba_cache in the directory returned by IPython.paths.get_ipython_cache_dir().
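For example, to redirect the cache to an explicit writable location (the directory name below is arbitrary), which can help when source trees or user directories are read-only:

```python
import os
import tempfile

# Point Numba's on-disk compilation cache at an explicit writable directory.
cache_dir = os.path.join(tempfile.gettempdir(), "numba_cache_demo")
os.makedirs(cache_dir, exist_ok=True)
os.environ["NUMBA_CACHE_DIR"] = cache_dir

print(os.environ["NUMBA_CACHE_DIR"])
```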
If set to non-zero, disable CUDA support.
If set, force the CUDA compute capability to the given version (a string of the type
major.minor), regardless of attached devices.
The default compute capability (a string of the type
major.minor) to target when compiling to PTX using
cuda.compile_ptx. The default is 5.2, which is the lowest non-deprecated compute capability in the most recent version of the CUDA toolkit supported (10.2 at present).
If set, don’t compile and execute code for the GPU, but use the CUDA Simulator instead. For debugging purposes.
If set, the number of threads in the thread pool for the parallel CPU target will take this value. Must be greater than zero. This value is independent of NUMBA_DEFAULT_NUM_THREADS.

Default value: The number of CPU cores on the system as determined at run time. This can be accessed via numba.config.NUMBA_DEFAULT_NUM_THREADS.

See also the section on Setting the Number of Threads for information on how to set the number of threads at runtime.
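A sketch of pinning the pool size from Python; the variable must be set before numba is imported, since the pool is sized once at start-up (the import is left commented so the snippet runs without Numba installed):

```python
import os

# Must happen before `import numba`; the thread pool is sized once.
os.environ["NUMBA_NUM_THREADS"] = "4"

# import numba
# numba.set_num_threads(2)  # may lower, but never exceed, NUMBA_NUM_THREADS

print(os.environ["NUMBA_NUM_THREADS"])
```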
This environment variable controls the library used for concurrent execution by the CPU parallel targets (@njit(parallel=True)). The variable type is string and the default is default, which selects a threading layer based on what is available in the runtime. The valid values are (for more information about these see the threading layer documentation):
- default - select a threading layer based on what is available in the current runtime.
- safe - select a threading layer that is both fork and thread safe (requires the TBB package).
- forksafe - select a threading layer that is fork safe.
- threadsafe - select a threading layer that is thread safe.
- tbb - A threading layer backed by Intel TBB.
- omp - A threading layer backed by OpenMP.
- workqueue - A simple built-in work-sharing task scheduler.
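For example, to request a fork-safe layer before Numba is imported (if no backend satisfying the request is available, Numba should raise an error when the first parallel region executes):

```python
import os

# Ask for a threading layer that survives fork(), e.g. when combining
# parallel jitted code with multiprocessing. Set before `import numba`.
os.environ["NUMBA_THREADING_LAYER"] = "forksafe"

print(os.environ["NUMBA_THREADING_LAYER"])
```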