Data types¶
oneDNN supports multiple data types. However, the 32-bit IEEE single-precision floating-point data type is the fundamental type in oneDNN. It is the only data type that must be supported by an implementation. All the other types discussed below are optional.
Primitives operating on the single-precision floating-point data type consume and produce data, and store intermediate results, in that same data type.
Moreover, the single-precision floating-point data type is often used for intermediate results in mixed-precision computations because it provides better accuracy. For example, the elementwise primitive and elementwise post-ops always use it internally.
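As a sketch of why f32 intermediates improve accuracy, the following self-contained C++ snippet simulates bf16 storage with a hypothetical `to_bf16()` rounding helper (not part of the oneDNN API) and compares accumulating a long sum in f32 against forcing every partial sum back through bf16:

```cpp
#include <cmath>
#include <cstdint>
#include <cstring>

// Hypothetical helper, not a oneDNN function: round an f32 value to bf16
// precision (round-to-nearest-even) by keeping only the top 16 bits.
inline float to_bf16(float x) {
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits += 0x7FFFu + ((bits >> 16) & 1u);  // round to nearest, ties to even
    bits &= 0xFFFF0000u;                    // drop the low 16 bits
    std::memcpy(&x, &bits, sizeof x);
    return x;
}

// Accumulate 1000 bf16 values of ~0.01 in an f32 accumulator.
inline float sum_f32_acc() {
    float acc = 0.0f;
    for (int i = 0; i < 1000; ++i) acc += to_bf16(0.01f);
    return acc;
}

// Same sum, but every partial result is stored back as bf16.
inline float sum_bf16_acc() {
    float acc = 0.0f;
    for (int i = 0; i < 1000; ++i) acc = to_bf16(acc + to_bf16(0.01f));
    return acc;
}
```

The f32 accumulator lands near the true sum (about 10.01, since 0.01 rounds up slightly in bf16), while the bf16 accumulator stalls once the running sum grows large enough that each addend falls below half a unit in the last place.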
oneDNN uses the following enumeration to refer to data types it supports:
enum dnnl::memory::data_type¶
Data type specification.

Values:

- enumerator undef¶ Undefined data type (used for empty memory descriptors).
- enumerator f16¶ 16-bit/half-precision floating point.
- enumerator bf16¶ Non-standard 16-bit floating point with 7-bit mantissa (bfloat16).
- enumerator f32¶ 32-bit/single-precision floating point.
- enumerator s32¶ 32-bit signed integer.
- enumerator s8¶ 8-bit signed integer.
- enumerator u8¶ 8-bit unsigned integer.
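The 7-bit mantissa of bf16 can be checked directly: with the implicit leading bit, the gap between 1.0 and the next representable bf16 value is 2^-7. A minimal sketch, again using a hypothetical `to_bf16()` rounding helper rather than any oneDNN API:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical helper, not a oneDNN function: round an f32 value to bf16
// precision (round-to-nearest-even) by keeping only the top 16 bits.
inline float to_bf16(float x) {
    std::uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits += 0x7FFFu + ((bits >> 16) & 1u);  // round to nearest, ties to even
    bits &= 0xFFFF0000u;                    // drop the low 16 bits
    std::memcpy(&x, &bits, sizeof x);
    return x;
}

// With 7 stored mantissa bits (plus the implicit one), the unit in the last
// place at 1.0 is 2^-7: an increment of 2^-7 survives rounding, while an
// increment well below half that (e.g. 2^-9) rounds away.
inline bool bf16_keeps(float delta) { return to_bf16(1.0f + delta) != 1.0f; }
```

Here `bf16_keeps(0x1p-7f)` is true and `bf16_keeps(0x1p-9f)` is false, matching the 7-bit mantissa stated above.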
oneDNN supports training and inference with the following data types:
Usage mode | Data types
---|---
inference | f32, bf16, f16, s8/u8
training | f32, bf16
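For the int8 inference path, f32 values must be mapped into the s8/u8 range, and out-of-range values saturate. A minimal sketch of scale-based quantization (the helper names and the single scale factor are illustrative, not oneDNN API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Illustrative helpers, not oneDNN API: map f32 values to s8 with a single
// scale factor, saturating at the s8 range [-128, 127].
inline std::int8_t quantize_s8(float x, float scale) {
    float q = std::round(x / scale);
    q = std::min(std::max(q, -128.0f), 127.0f);  // saturate out-of-range values
    return static_cast<std::int8_t>(q);
}

inline float dequantize_s8(std::int8_t q, float scale) {
    return static_cast<float>(q) * scale;
}
```

With `scale = 0.05f`, `quantize_s8(1.0f, 0.05f)` yields 20 and round-trips back to 1.0f, while `quantize_s8(10.0f, 0.05f)` saturates to 127.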
Note
Using lower precision arithmetic may require changes in the deep learning model implementation.
Individual primitives may have additional limitations with respect to data type support based on the precision requirements. The list of data types supported by each primitive is included in the corresponding sections of the specification guide.