There are two kinds of layers in the ML ecosystem.
One is big foundational models, such as CNNs trained on ImageNet, and now LLMs. The current trend is that the algorithms are more "brute force": they need more training data and more computation power. Transformer vs. RNN, and CNN vs. earlier image ML methods, are very good examples. The ML algorithms become "dumber" in my opinion, and thus become more powerful. Only big companies, or a few well-funded startups, can afford to train big models.
The other is applications built on this basic layer, such as breast-cancer detection built on image recognition, or bank chatbots built on LLMs.