Collection of useful `Optimizers` and their variants.
Ranger
[source]

`Ranger(params:Iterable, betas:Tuple[float,float]=(0.95, 0.999), eps:float=1e-05, k:int=6, alpha:float=0.5, lr=0.001, weight_decay=0)`

Convenience method for Lookahead with RAdam.
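Below is a minimal usage sketch, assuming `Ranger` is importable from this module; the model and data are placeholder tensors for illustration only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder model and data for illustration.
model = nn.Linear(10, 2)
opt = Ranger(model.parameters(), lr=1e-3, betas=(0.95, 0.999), k=6, alpha=0.5)

x, y = torch.randn(8, 10), torch.randn(8, 2)
loss = F.mse_loss(model(x), y)
loss.backward()
opt.step()       # RAdam inner update; Lookahead syncs slow weights every k steps
opt.zero_grad()
```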
class RangerGC
[source]

`RangerGC(params:Iterable, lr:float=0.001, alpha:float=0.5, k:int=6, N_sma_threshhold:int=5, betas:Tuple[float,float]=(0.95, 0.999), eps:float=1e-05, weight_decay:Union[float,int]=0, use_gc:bool=True, gc_conv_only:bool=False) :: Optimizer`
Ranger deep learning optimizer - RAdam + Lookahead + Gradient Centralization, combined into one optimizer.
Source - https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer/blob/master/ranger/ranger.py
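A hedged construction sketch: `use_gc` toggles gradient centralization and `gc_conv_only` restricts it to convolutional layers (per the linked source); the model here is a placeholder:

```python
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten())

opt = RangerGC(
    model.parameters(),
    lr=1e-3,
    k=6,                  # Lookahead sync interval
    alpha=0.5,            # Lookahead interpolation factor
    N_sma_threshhold=5,   # RAdam variance-rectification threshold (spelling as in source)
    use_gc=True,          # apply gradient centralization
    gc_conv_only=False,   # False: centralize both conv and fc gradients
)
```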
class SGDP
[source]

`SGDP(params:Iterable, lr=<required parameter>, momentum:Union[float,int]=0, dampening:Union[float,int]=0, weight_decay:Union[float,int]=0, nesterov:bool=False, eps:float=1e-08, delta:float=0.1, wd_ratio:Union[float,int]=0.1) :: Optimizer`
SGDP optimizer. Implementation copied from https://github.com/clovaai/AdamP/blob/master/adamp/sgdp.py
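Note that `lr` has no default and must be supplied. A minimal sketch; the comments on `delta` and `wd_ratio` reflect my reading of the linked reference implementation:

```python
import torch.nn as nn

model = nn.Linear(10, 2)

opt = SGDP(
    model.parameters(),
    lr=0.1,             # required: no default value
    momentum=0.9,
    nesterov=True,
    weight_decay=1e-4,
    delta=0.1,          # cosine-similarity threshold for the projection step
    wd_ratio=0.1,       # scales weight decay when projection is applied
)
```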
class AdamP
[source]

`AdamP(params:Iterable, lr:Union[float,int]=0.001, betas:Tuple[float,float]=(0.9, 0.999), eps:float=1e-08, weight_decay:Union[float,int]=0, delta:float=0.1, wd_ratio:float=0.1, nesterov:bool=False) :: Optimizer`
AdamP optimizer. Implementation copied from https://github.com/clovaai/AdamP/blob/master/adamp/adamp.py
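A minimal construction sketch mirroring the defaults above (placeholder model):

```python
import torch.nn as nn

model = nn.Linear(10, 2)

# AdamP follows Adam's update rule but, per the linked source, projects the
# update for scale-invariant weights and applies decoupled weight decay.
opt = AdamP(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    weight_decay=1e-2,
)
```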