Library-wise: Sadly, OpenCL is only an API with basic functions to harness GPU power for your custom kernels. Anything advanced you want either needs to be written by you or has to come from higher-level implementations built on top of OpenCL, which are a bit harder to obtain than their CUDA counterparts. Just a bit harder. At least there are many options.
For example, if you need a Fourier transform on the GPU, you will need at least one of the following:
clFFT
ViennaCL
ArrayFire
CLBlast
Numba-ROC
Tensorflow-cl
Eigen — to compare
FFTW — to compare CPU performance
Boost.Compute — maybe needed by others
LAPACK — maybe needed by others
MKL — maybe needed by others
C++ AMP
SYCL — maybe needed by others
…
Even then, the glue between any two of these pieces may not be as good as what Nvidia's fully-fledged CUDA libraries give you out of the box. But it's not NP-hard to write that glue yourself, nor to write your own (although slower) version of FFT.
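To back up the "write your own (slower) FFT" claim: a direct DFT really is only a few lines, it just runs in O(n²) instead of the O(n log n) of a proper FFT like clFFT's. A minimal CPU sketch in Python (function name is my own; NumPy is used only to check the result against a real FFT):

```python
import cmath
import numpy as np

def naive_dft(x):
    """Direct O(n^2) DFT: X[j] = sum_k x[k] * e^(-2*pi*i*j*k/n).

    Correct, but far slower than a real FFT for large n —
    exactly the "slower but workable" fallback described above.
    """
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

# Sanity check against NumPy's FFT on a small signal
signal = [1.0, 2.0, 0.0, -1.0]
assert np.allclose(naive_dft(signal), np.fft.fft(signal))
```

A GPU version is the same inner sum written as an OpenCL kernel, one work-item per output bin; it beats nothing on small inputs, but it removes the hard dependency on a third-party FFT library.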
Another advantage of CUDA is that every new version brings automatic optimizations. With a custom framework, every dependency needs to be updated individually, since they don't come from the same source; and if any two of them share a base dependency, you may not be able to upgrade one without upgrading the other.