As a data scientist, I used to struggle with experiments that involved training and fine-tuning large deep-learning models. For each experiment, I had to carefully document its hyperparameters (usually by embedding them in the model's name, e.g., model_epochs_50_batch_100), the training dataset used, and the model's performance. Managing and reproducing experiments this way quickly became unmanageable. Then I discovered Kedro, which solved these problems for me. If you run machine learning experiments, I believe this article will prove immensely beneficial.
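To illustrate the difference: instead of encoding hyperparameters in a filename like model_epochs_50_batch_100, Kedro lets you declare them in a versioned configuration file. The sketch below is a hypothetical parameters.yml following Kedro's standard conventions; the specific parameter names (epochs, batch_size, learning_rate) are illustrative, not taken from my project.

```yaml
# conf/base/parameters.yml — hyperparameters live in config,
# not in the model's filename, so every run is documented
# and reproducible from version control.
train:
  epochs: 50
  batch_size: 100
  learning_rate: 0.001
```

Because this file sits in the project's conf/ directory, each experiment's settings can be tracked in Git alongside the code, rather than reverse-engineered from a model name.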
Read my full article, published in Towards AI, here.