107. Theano Configuration and Compilation Modes


Configuration

As we have already seen, Theano's configuration can be inspected through the config module:

import theano
import theano.tensor as T

print theano.config
floatX (('float64', 'float32', 'float16')) 
    Doc:  Default floating-point precision for python casts.

Note: float16 support is experimental, use at your own risk.
    Value:  float32

warn_float64 (('ignore', 'warn', 'raise', 'pdb')) 
    Doc:  Do an action when a tensor variable with float64 dtype is created. They can't be run on the GPU with the current(old) gpu back-end and are slow with gamer GPUs.
    Value:  ignore

cast_policy (('custom', 'numpy+floatX')) 
    Doc:  Rules for implicit type casting
    Value:  custom

int_division (('int', 'raise', 'floatX')) 
    Doc:  What to do when one computes x / y, where both x and y are of integer types
    Value:  int

device (cpu, gpu*, opencl*, cuda*) 
    Doc:  Default device for computations. If gpu*, change the default to try to move computation to it and to put shared variable of float32 on it. Do not use upper case letters, only lower case even if NVIDIA use capital letters.
    Value:  gpu1

init_gpu_device (, gpu*, opencl*, cuda*) 
    Doc:  Initialize the gpu device to use, works only if device=cpu. Unlike 'device', setting this option will NOT move computations, nor shared variables, to the specified GPU. It can be used to run GPU-specific tests on a particular GPU.
    Value:  

force_device () 
    Doc:  Raise an error if we can't use the specified device
    Value:  False

contexts () 
    Doc:  
    Context map for multi-gpu operation. Format is a
    semicolon-separated list of names and device names in the
    'name->dev_name' format. An example that would map name 'test' to
    device 'cuda0' and name 'test2' to device 'opencl0:0' follows:
    "test->cuda0;test2->opencl0:0".

    Invalid context names are 'cpu', 'cuda*' and 'opencl*'

    Value:  

print_active_device () 
    Doc:  Print active device at when the GPU device is initialized.
    Value:  True

enable_initial_driver_test () 
    Doc:  Tests the nvidia driver when a GPU device is initialized.
    Value:  True

cuda.root () 
    Doc:  directory with bin/, lib/, include/ for cuda utilities.
       This directory is included via -L and -rpath when linking
       dynamically compiled modules.  If AUTO and nvcc is in the
       path, it will use one of nvcc parent directory.  Otherwise
       /usr/local/cuda will be used.  Leave empty to prevent extra
       linker directives.  Default: environment variable "CUDA_ROOT"
       or else "AUTO".

    Value:  /usr/local/cuda-7.0

nvcc.flags () 
    Doc:  Extra compiler flags for nvcc
    Value:  

nvcc.compiler_bindir () 
    Doc:  If defined, nvcc compiler driver will seek g++ and gcc in this directory
    Value:  

nvcc.fastmath () 
    Doc:  
    Value:  False

gpuarray.sync () 
    Doc:  If True, every op will make sure its work is done before
                returning.  Setting this to True will slow down execution,
                but give much more accurate results in profiling.
    Value:  False

gpuarray.preallocate () 
    Doc:  If 0 it doesn't do anything.  If between 0 and 1 it
             will preallocate that fraction of the total GPU memory.
             If 1 or greater it will preallocate that amount of memory
             (in megabytes).
    Value:  0.0

dnn.conv.workmem () 
    Doc:  This flag is deprecated; use dnn.conv.algo_fwd.
    Value:  True

dnn.conv.workmem_bwd () 
    Doc:  This flag is deprecated; use dnn.conv.algo_bwd.
    Value:  True

dnn.conv.algo_bwd () 
    Doc:  This flag is deprecated; use dnn.conv.algo_bwd_data and dnn.conv.algo_bwd_filter.
    Value:  True

dnn.conv.algo_fwd (('small', 'none', 'large', 'fft', 'fft_tiling', 'guess_once', 'guess_on_shape_change', 'time_once', 'time_on_shape_change')) 
    Doc:  Default implementation to use for CuDNN forward convolution.
    Value:  small

dnn.conv.algo_bwd_data (('none', 'deterministic', 'fft', 'fft_tiling', 'guess_once', 'guess_on_shape_change', 'time_once', 'time_on_shape_change')) 
    Doc:  Default implementation to use for CuDNN backward convolution to get the gradients of the convolution with regard to the inputs.
    Value:  none

dnn.conv.algo_bwd_filter (('none', 'deterministic', 'fft', 'small', 'guess_once', 'guess_on_shape_change', 'time_once', 'time_on_shape_change')) 
    Doc:  Default implementation to use for CuDNN backward convolution to get the gradients of the convolution with regard to the filters.
    Value:  none

dnn.conv.precision (('as_input', 'float16', 'float32', 'float64')) 
    Doc:  Default data precision to use for the computation in CuDNN convolutions (defaults to the same dtype as the inputs of the convolutions).
    Value:  as_input

dnn.include_path () 
    Doc:  Location of the cudnn header (defaults to the cuda root)
    Value:  /usr/local/cuda-7.0/include

dnn.library_path () 
    Doc:  Location of the cudnn header (defaults to the cuda root)
    Value:  /usr/local/cuda-7.0/lib64

assert_no_cpu_op (('ignore', 'warn', 'raise', 'pdb')) 
    Doc:  Raise an error/warning if there is a CPU op in the computational graph.
    Value:  ignore

mode (('Mode', 'ProfileMode', 'DebugMode', 'FAST_RUN', 'NanGuardMode', 'FAST_COMPILE', 'PROFILE_MODE', 'DEBUG_MODE')) 
    Doc:  Default compilation mode
    Value:  Mode

cxx () 
    Doc:  The C++ compiler to use. Currently only g++ is supported, but supporting additional compilers should not be too difficult. If it is empty, no C++ code is compiled.
    Value:  /usr/bin/g++

linker (('cvm', 'c|py', 'py', 'c', 'c|py_nogc', 'vm', 'vm_nogc', 'cvm_nogc')) 
    Doc:  Default linker used if the theano flags mode is Mode or ProfileMode(deprecated)
    Value:  cvm

allow_gc () 
    Doc:  Do we default to delete intermediate results during Theano function calls? Doing so lowers the memory requirement, but asks that we reallocate memory at the next function call. This is implemented for the default linker, but may not work for all linkers.
    Value:  True

optimizer (('fast_run', 'merge', 'fast_compile', 'None')) 
    Doc:  Default optimizer. If not None, will use this linker with the Mode object (not ProfileMode(deprecated) or DebugMode)
    Value:  fast_run

optimizer_verbose () 
    Doc:  If True, we print all optimization being applied
    Value:  False

on_opt_error (('warn', 'raise', 'pdb', 'ignore')) 
    Doc:  What to do when an optimization crashes: warn and skip it, raise the exception, or fall into the pdb debugger.
    Value:  warn


    Doc:  This config option was removed in 0.5: do not use it!
    Value:  True

nocleanup () 
    Doc:  Suppress the deletion of code files that did not compile cleanly
    Value:  False

on_unused_input (('raise', 'warn', 'ignore')) 
    Doc:  What to do if a variable in the 'inputs' list of  theano.function() is not used in the graph.
    Value:  raise

tensor.cmp_sloppy () 
    Doc:  Relax tensor._allclose (0) not at all, (1) a bit, (2) more
    Value:  0

tensor.local_elemwise_fusion () 
    Doc:  Enable or not in fast_run mode(fast_run optimization) the elemwise fusion optimization
    Value:  True

gpu.local_elemwise_fusion () 
    Doc:  Enable or not in fast_run mode(fast_run optimization) the gpu elemwise fusion optimization
    Value:  True

lib.amdlibm () 
    Doc:  Use amd's amdlibm numerical library
    Value:  False

gpuelemwise.sync () 
    Doc:  when true, wait that the gpu fct finished and check it error code.
    Value:  True

traceback.limit () 
    Doc:  The number of stack to trace. -1 mean all.
    Value:  8

experimental.mrg () 
    Doc:  Another random number generator that work on the gpu
    Value:  False

experimental.unpickle_gpu_on_cpu () 
    Doc:  Allow unpickling of pickled CudaNdarrays as numpy.ndarrays.This is useful, if you want to open a CudaNdarray without having cuda installed.If you have cuda installed, this will force unpickling tobe done on the cpu to numpy.ndarray.Please be aware that this may get you access to the data,however, trying to unpicke gpu functions will not succeed.This flag is experimental and may be removed any time, whengpu<>cpu transparency is solved.
    Value:  False

numpy.seterr_all (('ignore', 'warn', 'raise', 'call', 'print', 'log', 'None')) 
    Doc:  ("Sets numpy's behaviour for floating-point errors, ", "see numpy.seterr. 'None' means not to change numpy's default, which can be different for different numpy releases. This flag sets the default behaviour for all kinds of floating-point errors, its effect can be overriden for specific errors by the following flags: seterr_divide, seterr_over, seterr_under and seterr_invalid.")
    Value:  ignore

numpy.seterr_divide (('None', 'ignore', 'warn', 'raise', 'call', 'print', 'log')) 
    Doc:  Sets numpy's behavior for division by zero, see numpy.seterr. 'None' means using the default, defined by numpy.seterr_all.
    Value:  None

numpy.seterr_over (('None', 'ignore', 'warn', 'raise', 'call', 'print', 'log')) 
    Doc:  Sets numpy's behavior for floating-point overflow, see numpy.seterr. 'None' means using the default, defined by numpy.seterr_all.
    Value:  None

numpy.seterr_under (('None', 'ignore', 'warn', 'raise', 'call', 'print', 'log')) 
    Doc:  Sets numpy's behavior for floating-point underflow, see numpy.seterr. 'None' means using the default, defined by numpy.seterr_all.
    Value:  None

numpy.seterr_invalid (('None', 'ignore', 'warn', 'raise', 'call', 'print', 'log')) 
    Doc:  Sets numpy's behavior for invalid floating-point operation, see numpy.seterr. 'None' means using the default, defined by numpy.seterr_all.
    Value:  None

warn.ignore_bug_before (('0.6', 'None', 'all', '0.3', '0.4', '0.4.1', '0.5', '0.7')) 
    Doc:  If 'None', we warn about all Theano bugs found by default. If 'all', we don't warn about Theano bugs found by default. If a version, we print only the warnings relative to Theano bugs found after that version. Warning for specific bugs can be configured with specific [warn] flags.
    Value:  0.6

warn.argmax_pushdown_bug () 
    Doc:  Warn if in past version of Theano we generated a bug with the theano.tensor.nnet.nnet.local_argmax_pushdown optimization. Was fixed 27 may 2010
    Value:  False

warn.gpusum_01_011_0111_bug () 
    Doc:  Warn if we are in a case where old version of Theano had a silent bug with GpuSum pattern 01,011 and 0111 when the first dimensions was bigger then 4096. Was fixed 31 may 2010
    Value:  False

warn.sum_sum_bug () 
    Doc:  Warn if we are in a case where Theano version between version 9923a40c7b7a and the 2 august 2010 (fixed date), generated an error in that case. This happens when there are 2 consecutive sums in the graph, bad code was generated. Was fixed 2 August 2010
    Value:  False

warn.sum_div_dimshuffle_bug () 
    Doc:  Warn if previous versions of Theano (between rev. 3bd9b789f5e8, 2010-06-16, and cfc6322e5ad4, 2010-08-03) would have given incorrect result. This bug was triggered by sum of division of dimshuffled tensors.
    Value:  False

warn.subtensor_merge_bug () 
    Doc:  Warn if previous versions of Theano (before 0.5rc2) could have given incorrect results when indexing into a subtensor with negative stride (for instance, for instance, x[a:b:-1][c]).
    Value:  False

warn.gpu_set_subtensor1 () 
    Doc:  Warn if previous versions of Theano (before 0.6) could have given incorrect results when moving to the gpu set_subtensor(x[int vector], new_value)
    Value:  False

warn.vm_gc_bug () 
    Doc:  There was a bug that existed in the default Theano configuration, only in the development version between July 5th 2012 and July 30th 2012. This was not in a released version. If your code was affected by this bug, a warning will be printed during the code execution if you use the linker=vm,vm.lazy=True,warn.vm_gc_bug=True Theano flags. This warning is disabled by default as the bug was not released.
    Value:  False

warn.signal_conv2d_interface () 
    Doc:  Warn we use the new signal.conv2d() when its interface changed mid June 2014
    Value:  True

warn.reduce_join () 
    Doc:  Your current code is fine, but Theano versions prior to 0.7 (or this development version) might have given an incorrect result. To disable this warning, set the Theano flag warn.reduce_join to False. The problem was an optimization, that modified the pattern "Reduce{scalar.op}(Join(axis=0, a, b), axis=0)", did not check the reduction axis. So if the reduction axis was not 0, you got a wrong answer.
    Value:  True

warn.inc_set_subtensor1 () 
    Doc:  Warn if previous versions of Theano (before 0.7) could have given incorrect results for inc_subtensor and set_subtensor when using some patterns of advanced indexing (indexing with one vector or matrix of ints).
    Value:  True

compute_test_value (('off', 'ignore', 'warn', 'raise', 'pdb')) 
    Doc:  If 'True', Theano will run each op at graph build time, using Constants, SharedVariables and the tag 'test_value' as inputs to the function. This helps the user track down problems in the graph before it gets optimized.
    Value:  off

print_test_value () 
    Doc:  If 'True', the __eval__ of a Theano variable will return its test_value when this is available. This has the practical conseguence that, e.g., in debugging my_var will print the same as my_var.tag.test_value when a test value is defined.
    Value:  False

compute_test_value_opt (('off', 'ignore', 'warn', 'raise', 'pdb')) 
    Doc:  For debugging Theano optimization only. Same as compute_test_value, but is used during Theano optimization
    Value:  off

unpickle_function () 
    Doc:  Replace unpickled Theano functions with None. This is useful to unpickle old graphs that pickled them when it shouldn't
    Value:  True

reoptimize_unpickled_function () 
    Doc:  Re-optimize the graph when a theano function is unpickled from the disk.
    Value:  False

exception_verbosity (('low', 'high')) 
    Doc:  If 'low', the text of exceptions will generally refer to apply nodes with short names such as Elemwise{add_no_inplace}. If 'high', some exceptions will also refer to apply nodes with long descriptions  like:
    A. Elemwise{add_no_inplace}
            B. log_likelihood_v_given_h
            C. log_likelihood_h
    Value:  low

openmp () 
    Doc:  Allow (or not) parallel computation on the CPU with OpenMP. This is the default value used when creating an Op that supports OpenMP parallelization. It is preferable to define it via the Theano configuration file ~/.theanorc or with the environment variable THEANO_FLAGS. Parallelization is only done for some operations that implement it, and even for operations that implement parallelism, each operation is free to respect this flag or not. You can control the number of threads used with the environment variable OMP_NUM_THREADS. If it is set to 1, we disable openmp in Theano by default.
    Value:  False

openmp_elemwise_minsize () 
    Doc:  If OpenMP is enabled, this is the minimum size of vectors for which the openmp parallelization is enabled in element wise ops.
    Value:  200000

check_input () 
    Doc:  Specify if types should check their input in their C code. It can be used to speed up compilation, reduce overhead (particularly for scalars) and reduce the number of generated C files.
    Value:  True

cache_optimizations () 
    Doc:  WARNING: work in progress, does not work yet. Specify if the optimization cache should be used. This cache will any optimized graph and its optimization. Actually slow downs a lot the first optimization, and could possibly still contains some bugs. Use at your own risks.
    Value:  False

unittests.rseed () 
    Doc:  Seed to use for randomized unit tests. Special value 'random' means using a seed of None.
    Value:  666

compile.wait () 
    Doc:  Time to wait before retrying to aquire the compile lock.
    Value:  5

compile.timeout () 
    Doc:  In seconds, time that a process will wait before deciding to
override an existing lock. An override only happens when the existing
lock is held by the same owner *and* has not been 'refreshed' by this
owner for more than this period. Refreshes are done every half timeout
period for running processes.
    Value:  120

compiledir_format () 
    Doc:  Format string for platform-dependent compiled module subdirectory
(relative to base_compiledir). Available keys: gxx_version, hostname,
numpy_version, platform, processor, python_bitwidth,
python_int_bitwidth, python_version, short_platform, theano_version.
Defaults to 'compiledir_%(short_platform)s-%(processor)s-%(python_vers
ion)s-%(python_bitwidth)s'.
    Value:  compiledir_%(short_platform)s-%(processor)s-%(python_version)s-%(python_bitwidth)s

base_compiledir () 
    Doc:  platform-independent root directory for compiled modules
    Value:  /home/lijin/.theano

compiledir () 
    Doc:  platform-dependent cache directory for compiled modules
    Value:  /home/lijin/.theano/compiledir_Linux-3.13--generic-x86_64-with-Ubuntu-14.04-trusty-x86_64-2.7.6-64

cmodule.mac_framework_link () 
    Doc:  If set to True, breaks certain MacOS installations with the infamous Bus Error
    Value:  False

cmodule.warn_no_version () 
    Doc:  If True, will print a warning when compiling one or more Op with C code that can't be cached because there is no c_code_cache_version() function associated to at least one of those Ops.
    Value:  False

cmodule.remove_gxx_opt () 
    Doc:  If True, will remove the -O* parameter passed to g++.This is useful to debug in gdb modules compiled by Theano.The parameter -g is passed by default to g++
    Value:  False

cmodule.compilation_warning () 
    Doc:  If True, will print compilation warnings.
    Value:  False

cmodule.preload_cache () 
    Doc:  If set to True, will preload the C module cache at import time
    Value:  False

gcc.cxxflags () 
    Doc:  Extra compiler flags for gcc
    Value:  

metaopt.verbose () 
    Doc:  Enable verbose output for meta optimizers
    Value:  False

optdb.position_cutoff () 
    Doc:  Where to stop eariler during optimization. It represent the position of the optimizer where to stop.
    Value:  inf

optdb.max_use_ratio () 
    Doc:  A ratio that prevent infinite loop in EquilibriumOptimizer.
    Value:  5.0

profile () 
    Doc:  If VM should collect profile information
    Value:  False

profile_optimizer () 
    Doc:  If VM should collect optimizer profile information
    Value:  False

profile_memory () 
    Doc:  If VM should collect memory profile information and print it
    Value:  False

vm.lazy () 
    Doc:  Useful only for the vm linkers. When lazy is None, auto detect if lazy evaluation is needed and use the apropriate version. If lazy is True/False, force the version used between Loop/LoopGC and Stack.
    Value:  None

optimizer_excluding () 
    Doc:  When using the default mode, we will remove optimizer with these tags. Separate tags with ':'.
    Value:  

optimizer_including () 
    Doc:  When using the default mode, we will add optimizer with these tags. Separate tags with ':'.
    Value:  

optimizer_requiring () 
    Doc:  When using the default mode, we will require optimizer with these tags. Separate tags with ':'.
    Value:  

DebugMode.patience () 
    Doc:  Optimize graph this many times to detect inconsistency
    Value:  10

DebugMode.check_c () 
    Doc:  Run C implementations where possible
    Value:  True

DebugMode.check_py () 
    Doc:  Run Python implementations where possible
    Value:  True

DebugMode.check_finite () 
    Doc:  True -> complain about NaN/Inf results
    Value:  True

DebugMode.check_strides () 
    Doc:  Check that Python- and C-produced ndarrays have same strides. On difference: (0) - ignore, (1) warn, or (2) raise error
    Value:  0

DebugMode.warn_input_not_reused () 
    Doc:  Generate a warning when destroy_map or view_map says that an op works inplace, but the op did not reuse the input for its output.
    Value:  True

DebugMode.check_preallocated_output () 
    Doc:  Test thunks with pre-allocated memory as output storage. This is a list of strings separated by ":". Valid values are: "initial" (initial storage in storage map, happens with Scan),"previous" (previously-returned memory), "c_contiguous", "f_contiguous", "strided" (positive and negative strides), "wrong_size" (larger and smaller dimensions), and "ALL" (all of the above).
    Value:  

DebugMode.check_preallocated_output_ndim () 
    Doc:  When testing with "strided" preallocated output memory, test all combinations of strides over that number of (inner-most) dimensions. You may want to reduce that number to reduce memory or time usage, but it is advised to keep a minimum of 2.
    Value:  4

profiling.time_thunks () 
    Doc:  Time individual thunks when profiling
    Value:  True

profiling.n_apply () 
    Doc:  Number of Apply instances to print by default
    Value:  20

profiling.n_ops () 
    Doc:  Number of Ops to print by default
    Value:  20

profiling.output_line_width () 
    Doc:  Max line width for the profiling output
    Value:  512

profiling.min_memory_size () 
    Doc:  For the memory profile, do not print Apply nodes if the size
             of their outputs (in bytes) is lower than this threshold
    Value:  1024

profiling.min_peak_memory () 
    Doc:  The min peak memory usage of the order
    Value:  False

profiling.destination () 
    Doc:  
             File destination of the profiling output

    Value:  stderr

profiling.debugprint () 
    Doc:  
             Do a debugprint of the profiled functions

    Value:  False

ProfileMode.n_apply_to_print () 
    Doc:  Number of apply instances to print by default
    Value:  15

ProfileMode.n_ops_to_print () 
    Doc:  Number of ops to print by default
    Value:  20

ProfileMode.min_memory_size () 
    Doc:  For the memory profile, do not print apply nodes if the size of their outputs (in bytes) is lower then this threshold
    Value:  1024

ProfileMode.profile_memory () 
    Doc:  Enable profiling of memory used by Theano functions
    Value:  False

on_shape_error (('warn', 'raise')) 
    Doc:  warn: print a warning and use the default value. raise: raise an error
    Value:  warn

tensor.insert_inplace_optimizer_validate_nb () 
    Doc:  -1: auto, if graph have less then 500 nodes 1, else 10
    Value:  -1

experimental.local_alloc_elemwise () 
    Doc:  DEPRECATED: If True, enable the experimental optimization local_alloc_elemwise. Generates error if not True. Use optimizer_excluding=local_alloc_elemwise to dsiable.
    Value:  True

experimental.local_alloc_elemwise_assert () 
    Doc:  When the local_alloc_elemwise is applied, add an assert to highlight shape errors.
    Value:  True

blas.ldflags () 
    Doc:  lib[s] to include for [Fortran] level-3 blas implementation
    Value:  -lblas

warn.identify_1pexp_bug () 
    Doc:  Warn if Theano versions prior to 7987b51 (2011-12-18) could have yielded a wrong result due to a bug in the is_1pexp function
    Value:  False

scan.allow_gc () 
    Doc:  Allow/disallow gc inside of Scan (default: False)
    Value:  False

scan.allow_output_prealloc () 
    Doc:  Allow/disallow memory preallocation for outputs inside of scan (default: True)
    Value:  True

pycuda.init () 
    Doc:  If True, always initialize PyCUDA when Theano want to
           initilize the GPU.  Currently, we must always initialize
           PyCUDA before Theano do it.  Setting this flag to True,
           ensure that, but always import PyCUDA.  It can be done
           manually by importing theano.misc.pycuda_init before theano
           initialize the GPU device.

    Value:  False

cublas.lib () 
    Doc:  Name of the cuda blas library for the linker.
    Value:  cublas

lib.cnmem () 
    Doc:  Do we enable CNMeM or not (a faster CUDA memory allocator).

             The parameter represent the start size (in MB or % of
             total GPU memory) of the memory pool.

             0: not enabled.
             0 < N <= 1: % of the total GPU memory (clipped to .985 for driver memory)
             > 0: use that number of MB of memory.

    Value:  0.0


Using gpu device 1: Tesla K10.G2.8GB (CNMeM is disabled)

These settings affect how Theano runs. Many of them are read-only, so we should avoid modifying them directly from within our programs.

Most parameters have default values. The configuration can be changed in the .theanorc file or through the THEANO_FLAGS environment variable; the priority order, from highest to lowest, is:

  • first, an assignment to theano.config.<property>
  • then, the contents of the THEANO_FLAGS environment variable
  • finally, the contents of the .theanorc file (or of the file named by the THEANORC environment variable)
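
For example, the highest-priority method, direct assignment, works at runtime for any flag that is not read-only. A minimal sketch (warn_float64 is just one example of a runtime-settable flag):

import theano

print theano.config.floatX             # e.g. float32
theano.config.warn_float64 = 'warn'    # takes precedence over THEANO_FLAGS and .theanorc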

The meaning of each parameter is documented at:

http://deeplearning.net/software/theano/library/config.html

The THEANO_FLAGS environment variable

To run a program with the THEANO_FLAGS environment variable (using <myscript>.py as a placeholder for your script):

THEANO_FLAGS='floatX=float32,device=gpu0,nvcc.fastmath=True' python <myscript>.py

On Windows, the command needs a slight change:

set THEANO_FLAGS='floatX=float32,device=gpu0,nvcc.fastmath=True' && python <myscript>.py

The configuration in this example sets the floating-point precision to 32 bits, runs the computation on GPU 0, and compiles the CUDA code in fastmath mode.

The THEANORC configuration file

The THEANORC environment variable defaults to $HOME/.theanorc (on Windows, to $HOME/.theanorc:$HOME/.theanorc.txt).

A configuration file equivalent to the THEANO_FLAGS settings above is:

[global]
floatX = float32
device = gpu0

[nvcc]
fastmath = True

Here the [global] section holds the parameters that live directly on config, such as config.device and config.mode; parameters in config's submodules, such as config.nvcc.fastmath and config.blas.ldflags, are set in the corresponding sections, here [nvcc] and [blas].
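
For instance, a .theanorc that additionally set config.blas.ldflags would add a [blas] section (the -lopenblas value below is only illustrative):

[global]
floatX = float32
device = gpu0

[nvcc]
fastmath = True

[blas]
ldflags = -lopenblas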

Modes

Each time theano.function is called, the graph linking the symbolic variables is optimized and compiled, and how that optimization and compilation are done is determined by config.mode.

Theano defines four modes:

  • FAST_COMPILE
    • compile.mode.Mode(linker='py', optimizer='fast_compile')
    • Python implementations; quick to build, slow to run
  • FAST_RUN
    • compile.mode.Mode(linker='cvm', optimizer='fast_run')
    • C implementations; slower to build, fast to run
  • DebugMode
    • compile.debugmode.DebugMode()
    • debugging mode; either implementation may be used
  • ProfileMode
    • compile.profilemode.ProfileMode()
    • C implementations; deprecated, use theano.profile instead
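
The mode can also be selected per function, overriding config.mode for that compilation only; a minimal sketch:

import theano
import theano.tensor as T

x = T.dscalar('x')
f = theano.function([x], x ** 2, mode='FAST_COMPILE')  # pure-Python linker, quick to build

print f(3.0)  # prints 9.0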

For more details, see:

http://deeplearning.net/software/theano/library/compile/mode.html#libdoc-compile-mode

Linkers

As the definitions above show, a mode consists of two parts: an optimizer and a linker. ProfileMode and DebugMode use their own linkers.

The available linkers are listed in the table at:

http://deeplearning.net/software/theano/tutorial/modes.html#linkers
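
As a sketch built directly from the definitions above, a custom Mode pairing a specific linker with a specific optimizer can be passed to theano.function:

import theano
import theano.tensor as T
from theano.compile.mode import Mode

x = T.dvector('x')
# pair the pure-Python 'py' linker with the lightweight 'fast_compile' optimizer
f = theano.function([x], x + 1, mode=Mode(linker='py', optimizer='fast_compile'))

print f([1, 2, 3])  # prints [ 2.  3.  4.]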

Using DebugMode

Before settling on FAST_RUN or FAST_COMPILE, it is usually a good idea to debug with DebugMode first, even though it is much slower than the other two modes.

Let's look at the difference with an example:

x = T.dvector('x')

f_1 = theano.function([x], 10 / x)

print f_1([5])
print f_1([0])
print f_1([7])
[ 2.]
[ inf]
[ 1.42857143]

In a non-debug mode, dividing by zero is allowed and silently yields inf, but DebugMode raises an error that helps us track the problem down:

f_2 = theano.function([x], 10 / x, mode='DebugMode')

print f_2([5])
print f_2([0])
print f_2([7])
[ 2.]

---------------------------------------------------------------------------

InvalidValueError                         Traceback (most recent call last)

 in ()
      2 
      3 print f_2([5])
----> 4 print f_2([0])
      5 print f_2([7])

/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.pyc in __call__(self, *args, **kwargs)
    857         t0_fn = time.time()
    858         try:
--> 859             outputs = self.fn()
    860         except Exception:
    861             if hasattr(self.fn, 'position_of_error'):

/usr/local/lib/python2.7/dist-packages/theano/compile/debugmode.pyc in deco()
   2339                     self.maker.mode.check_isfinite
   2340                 try:
-> 2341                     return f()
   2342                 finally:
   2343                     # put back the filter_checks_isfinite

/usr/local/lib/python2.7/dist-packages/theano/compile/debugmode.pyc in f()
   2079                                 raise InvalidValueError(r, storage_map[r][0],
   2080                                                         hint='perform output',
-> 2081                                                         specific_hint=hint2)
   2082                         warn_inp = config.DebugMode.warn_input_not_reused
   2083                         py_inplace_outs = _check_inputs(

InvalidValueError: InvalidValueError
        type(variable) = TensorType(float64, vector)
        variable       = Elemwise{true_div,no_inplace}.0
        type(value)    = 
        dtype(value)   = float64
        shape(value)   = (1,)
        value          = [ inf]
        min(value)     = inf
        max(value)     = inf
        isfinite       = False
        client_node    = None
        hint           = perform output
        specific_hint  = non-finite elements not allowed
        context        = ...
  Elemwise{true_div,no_inplace} [id A] ''   
   |TensorConstant{(1,) of 10.0} [id B]
   |x [id C]
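
If non-finite outputs are expected, the DebugMode.check_finite flag shown in the configuration dump above can be turned off so that this check is skipped. A hedged sketch, assuming the nested attribute access Theano's config provides, set before the function is compiled:

import theano

theano.config.DebugMode.check_finite = False  # stop treating NaN/Inf outputs as errors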

For more details, see:

http://deeplearning.net/software/theano/library/compile/debugmode.html#debugmode
