Welcome to Jikai Jin's website
Minimax Optimal Kernel Operator Learning via Multilevel Training
We analyze the minimax optimal learning rate for learning linear operators between Sobolev spaces and identify a setting where multilevel training is necessary to achieve the optimal rate.
Jikai Jin, Yiping Lu, Jose Blanchet, Lexing Ying
PDF · Cite · ArXiv
Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
We provide theoretical evidence that the hardness of robust generalization may stem from limits on the expressive power of deep neural networks. Even when standard generalization is easy, robust generalization provably requires exponentially large DNNs.
Binghui Li, Jikai Jin, Han Zhong, John E. Hopcroft, Liwei Wang
PDF · Cite · ArXiv
Understanding Riemannian Acceleration via a Proximal Extragradient Framework
We develop a Riemannian version of the accelerated hybrid proximal extragradient (A-HPE) framework, yielding a unified analysis of accelerated first-order methods for geodesically convex optimization.
Jikai Jin, Suvrit Sra
PDF · Cite · ArXiv
Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis
We propose the first non-asymptotic analysis of algorithms for DRO with non-convex losses. Our algorithm incorporates momentum and adaptive step sizes, and achieves superior empirical performance.
Jikai Jin, Bohang Zhang, Haiyang Wang, Liwei Wang
PDF · Cite · ArXiv
Improved Analysis of Clipping Algorithms for Non-convex Optimization
We provide an improved analysis of the convergence rates of clipping algorithms, theoretically justifying their superior performance in deep learning.
Bohang Zhang, Jikai Jin, Cong Fang, Liwei Wang
PDF · Cite · ArXiv