Andy Wang in Towards Data Science: "The Underlying Dangers Behind Large Batch Training Schemes" (Nov 12, 2022). The Hows and Whys behind the Generalization Gap and How to Minimize it.
Andy Wang in Towards Data Science: "Why Using Learning Rate Schedulers In NNs May Be a Waste of Time" (Aug 5, 2022). Hint: Batch size is the key and it might not be what you think!
Andy Wang in Towards Data Science: "What is Really “Fair” in Machine Learning?" (Oct 6, 2021). Evaluation and Representation of Fairness in modern ML.
Andy Wang in Towards Data Science: "My Journey to Kaggle Master at the Age of 14" (Aug 29, 2021). Tips and tricks from one of the youngest Kaggle Competition Masters.
Andy Wang: "Bags of Tricks for Multi-Label Classification" (Aug 26, 2021). Tips and essentials for boosting your model performance in multi-label classification.
Andy Wang in Towards Data Science: "Novel Approaches to Similarity Learning" (Mar 25, 2021). From Siamese networks and Triplet Loss to ArcFace Loss.