Hello, thanks for sharing your code.
Could you tell me whether there is a good strategy for training on a large-scale dataset? I trained sphereface-20 on MS-Celeb-1M, then fine-tuned spherefacenet20_ms-celeb-1m.caffemodel on the VGGFace2 dataset. At the beginning of training the loss was 12; after 20,000 iterations it had decreased to 9, but after that it stopped changing. I tried setting lambda_min = 10 and lowering the learning rate to 0.001, but it still did not converge. How should I set the hyperparameters, and is convergence actually possible here? Can you help me?
Using the settings in my prototxt:
type: SINGLE
base: 5
gamma: 0
power: 1
lambda_min: 5
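For context, these parameters control the annealing of the margin weight lambda in the A-Softmax loss. A minimal sketch of the schedule, assuming it follows the form lambda = max(lambda_min, base * (1 + gamma * iter)^(-power)) as in the SphereFace margin layer (an illustration, not the exact Caffe code):

```python
# Sketch of the SphereFace lambda annealing schedule, assuming
# lambda(t) = max(lambda_min, base * (1 + gamma * t) ** (-power)).
def sphereface_lambda(iteration, base=5.0, gamma=0.0, power=1.0, lambda_min=5.0):
    """Margin weight lambda at a given training iteration."""
    return max(lambda_min, base * (1.0 + gamma * iteration) ** (-power))

# With the settings above (base=5, gamma=0, power=1, lambda_min=5),
# the decay term (1 + 0*t)^(-1) is always 1, so lambda is pinned at 5:
for t in (0, 20000, 100000):
    print(t, sphereface_lambda(t))  # prints 5.0 at every iteration
```

Note that with gamma = 0 this configuration keeps lambda fixed at 5 for the entire run, rather than annealing it from a large value down to lambda_min.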