Description
Deep learning, powerful yet often regarded as a black box, continues to capture attention because of the mathematics and philosophy behind each block of its pipeline. Researchers convince the world of a method by thoroughly explaining every detail, from the structure of code blocks to the choice of each parameter. In sensitive domains, this level of explanation is even more critical: responsible and explainable AI demands clarity in every decision.
Before arriving at a final solution, researchers often run dozens, or even hundreds, of experiments: changing components, tweaking parameters, or shifting approaches altogether. Managing these experiments, along with the related code and components, typically means navigating a maze of folders and files. Over time this becomes frustrating, especially when trying to compare experiments by specific changes or results, and it distracts researchers from the real work: problem-solving, creativity, and analysis.
To address this, I am developing PyTorchLabFlow, a lightweight, offline-friendly experiment management framework that brings order to the mess of deep learning experiments in a modular and reproducible way.
My talk will cover:
- Why experimentation becomes messy, and why this matters in AI, especially deep learning.
- Why researchers need to organize and structure everything they do.
- A quick overview of PyTorchLabFlow: how it helps, how it works, and what it offers.
PyTorchLabFlow is open source, and a stable version is available on GitHub: https://github.com/BBEK-Anand/PyTorchLabFlow