Supported operations

This section describes all operations supported by oneDAL. For more information about the general definition of an operation, refer to the Operations section.

The table below specifies whether an algorithm's descriptor can be used together with each operation.
| Algorithm | Train | Infer | Compute |
|---|---|---|---|
| K-Means | Yes | Yes | No |
| K-Means Initialization | No | No | Yes |
| k-NN Classification | Yes | Yes | No |
| PCA | Yes | Yes | No |
Train

The train operation performs the training procedure of a machine learning algorithm. The result obtained after training contains a model that can be passed to the infer operation.
namespace oneapi::dal {

template <typename Descriptor>
using train_input_t = /* implementation defined */;

template <typename Descriptor>
using train_result_t = /* implementation defined */;

template <typename Descriptor>
train_result_t<Descriptor> train(
    sycl::queue& queue,
    const Descriptor& desc,
    const train_input_t<Descriptor>& input);

} // namespace oneapi::dal
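As an illustration, the following is a minimal usage sketch rather than part of the specification above. It assumes the K-Means algorithm descriptor defined elsewhere in oneDAL, its set_cluster_count property, and that the corresponding train input is constructible from a data table; the exact descriptor, input, and result types are defined by the individual algorithm specifications.

// Hypothetical sketch: kmeans::descriptor, set_cluster_count, the input
// construction from a table, and get_model() are assumptions taken from the
// algorithm-specific parts of oneDAL, not from this section.
namespace dal = oneapi::dal;
using descriptor_t = dal::kmeans::descriptor<>;

sycl::queue queue{sycl::default_selector_v};

const auto desc = descriptor_t{}.set_cluster_count(10);
const dal::table data = /* user-provided training data */;

const auto input  = dal::train_input_t<descriptor_t>{data};
const auto result = dal::train(queue, desc, input);
const auto model  = result.get_model(); // passed to the infer operation below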
Infer

The infer operation performs the inference procedure of a machine learning algorithm based on the model obtained as a result of training.
namespace oneapi::dal {

template <typename Descriptor>
using infer_input_t = /* implementation defined */;

template <typename Descriptor>
using infer_result_t = /* implementation defined */;

template <typename Descriptor>
infer_result_t<Descriptor> infer(
    sycl::queue& queue,
    const Descriptor& desc,
    const infer_input_t<Descriptor>& input);

} // namespace oneapi::dal
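Continuing the K-Means sketch above, inference might look as follows; the construction of the infer input from a model and a data table, and the exact content of the result, are again assumptions based on the algorithm-specific parts of oneDAL.

// Hypothetical sketch, reusing queue, desc, descriptor_t, and model from the
// train example; the input constructor shown here is an assumption.
const dal::table test_data = /* user-provided data to be assigned to clusters */;

const auto infer_input  = dal::infer_input_t<descriptor_t>{model, test_data};
const auto infer_result = dal::infer(queue, desc, infer_input);
// The accessors of infer_result (for example, cluster assignments for K-Means)
// are defined by the algorithm-specific result type.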
Compute

The compute operation is used when an algorithm does not have well-defined training and inference stages.
namespace oneapi::dal {

template <typename Descriptor>
using compute_input_t = /* implementation defined */;

template <typename Descriptor>
using compute_result_t = /* implementation defined */;

template <typename Descriptor>
compute_result_t<Descriptor> compute(
    sycl::queue& queue,
    const Descriptor& desc,
    const compute_input_t<Descriptor>& input);

} // namespace oneapi::dal
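A corresponding sketch for an algorithm that exposes only the compute operation, using K-Means Initialization as an example; the kmeans_init descriptor, its cluster count property, and the get_centroids accessor are assumptions based on the algorithm-specific parts of oneDAL, not requirements of this section.

// Hypothetical sketch, reusing queue and data from the train example; the
// kmeans_init names are assumptions, not defined by this section.
using init_descriptor_t = dal::kmeans_init::descriptor<>;

const auto init_desc   = init_descriptor_t{}.set_cluster_count(10);
const auto init_input  = dal::compute_input_t<init_descriptor_t>{data};
const auto init_result = dal::compute(queue, init_desc, init_input);
const dal::table centroids = init_result.get_centroids();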