About me
Howdy! I am a research scientist at Google Research. I am interested in developing provably better optimization algorithms and generalization-improving techniques for machine learning (ML), especially under data-centric constraints such as restricted data access (e.g., due to privacy), poor data quality, and computational limits imposed by large-scale data. In general, I like to develop theoretically grounded ML algorithms.
I obtained my PhD in Computer Science from UT Austin, advised by Prof. Sujay Sanghavi and Prof. Inderjit S. Dhillon. Before that, I received the B.Tech. and M.Tech. degrees in Electrical Engineering from IIT Bombay, where I worked with Prof. Subhasis Chaudhuri and received the Undergraduate Research Award.
Papers
“Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing” - A Javanmard, R Das, A Epasto, and V Mirrokni.
Preprint. Download here.
“Upweighting Easy Samples in Fine-Tuning Mitigates Forgetting” - S Sanyal*, H Prairie*, R Das*, A Kavis*, and S Sanghavi (* denotes equal contribution).
ICML 2025 (spotlight). Download preprint here.
“Retraining with Predicted Hard Labels Provably Increases Model Accuracy” - R Das, I S Dhillon, A Epasto, A Javanmard, J Mao, V Mirrokni, S Sanghavi, and P Zhong.
ICML 2025. Download preprint here.
“Towards Quantifying the Preconditioning Effect of Adam” - R Das, N Agarwal, S Sanghavi, and I S Dhillon.
Preprint. Download here.
“Understanding the Training Speedup from Sampling with Approximate Losses” - R Das, X Chen, B Ieong, P Bansal, and S Sanghavi.
ICML 2024. Download paper here.
“Understanding Self-Distillation in the Presence of Label Noise” - R Das and S Sanghavi.
ICML 2023. Download paper here.
“On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data” - J Wang, R Das, G Joshi, S Kale, Z Xu, and T Zhang.
TMLR. Download paper here.
“Beyond Uniform Lipschitz Condition in Differentially Private Optimization” - R Das, S Kale, Z Xu, T Zhang, and S Sanghavi.
ICML 2023. Download paper here.
“Differentially Private Federated Learning with Normalized Updates” - R Das, A Hashemi, S Sanghavi, and I S Dhillon.
Download preprint here. A short version was presented at the OPT2022 workshop at NeurIPS 2022; download it here.
“Faster Non-Convex Federated Learning via Global and Local Momentum” - R Das, A Acharya, A Hashemi, S Sanghavi, I S Dhillon, and U Topcu.
“On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization” - A Hashemi, A Acharya*, R Das*, H Vikalo, S Sanghavi, and I S Dhillon (* denotes equal contribution).
IEEE Transactions on Parallel and Distributed Systems. Download paper here and preprint here.
“On the Convergence of a Biased Version of Stochastic Gradient Descent” - R Das, J Zhang, and I S Dhillon.
NeurIPS 2019 Beyond First Order Methods in ML workshop. Download paper here.
“On the Separability of Classes with the Cross-Entropy Loss Function” - R Das and S Chaudhuri.
Preprint. Download here.
“Nonlinear Blind Compressed Sensing under Signal-Dependent Noise” - R Das and A Rajwade.
IEEE International Conference on Image Processing (ICIP) 2019. Download paper here.
“Sparse Kernel PCA for Outlier Detection” - R Das, A Golatkar, and S Awate.
IEEE International Conference on Machine Learning and Applications (ICMLA) 2018 (oral). Download paper here.
iFood Challenge, FGVC Workshop, CVPR 2018 - P Kothari*, A Sadhu*, A Golatkar*, and R Das* (* denotes equal contribution).
Finished 2nd on the public leaderboard and 3rd on the private leaderboard (team name: Invincibles). Leaderboard link. Invited to present our method at CVPR 2018 (slides).