
tginart commented Oct 19, 2020

Adding AMSGrad improves numerical performance. Note that this naive implementation requires dense gradients, which is inefficient.
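For context, AMSGrad differs from Adam by keeping a running elementwise maximum of the second-moment estimate and dividing by that maximum instead of the current estimate; taking the max over the whole state buffer is the step that most naturally assumes dense gradients. A minimal sketch of one update step in PyTorch (illustrative only, not the actual code in this PR):

```python
import torch

def amsgrad_step(param, exp_avg, exp_avg_sq, max_exp_avg_sq, step,
                 lr=0.001, betas=(0.9, 0.999), eps=1e-8):
    # One AMSGrad update on a single parameter tensor. All optimizer state
    # buffers are dense tensors with the same shape as `param`.
    grad = param.grad
    beta1, beta2 = betas
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)               # m_t
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)  # v_t
    # AMSGrad: v_hat_t = max(v_hat_{t-1}, v_t). This elementwise max over
    # the full buffer is why the naive implementation densifies gradients.
    torch.max(max_exp_avg_sq, exp_avg_sq, out=max_exp_avg_sq)
    bias_c1 = 1 - beta1 ** step
    bias_c2 = 1 - beta2 ** step
    denom = (max_exp_avg_sq / bias_c2).sqrt().add_(eps)
    param.data.addcdiv_(exp_avg, denom, value=-lr / bias_c1)
```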

This PR also includes a heuristic for better distributing the embedding layers across devices during parallel training.
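The comment thread doesn't spell out the heuristic, but a common approach to this problem is greedy largest-first load balancing over table sizes. A hypothetical sketch under that assumption (function and variable names are mine, not from this PR):

```python
def distribute_embeddings(table_sizes, num_devices):
    """Assign embedding tables to devices, balancing total parameter count.

    A plausible sketch of such a heuristic, not necessarily the one
    behind --use-emb-distrib-heuristic.
    """
    load = [0] * num_devices
    assignment = {}
    # Visit tables from largest to smallest; place each on the
    # currently least-loaded device.
    for idx in sorted(range(len(table_sizes)), key=lambda i: -table_sizes[i]):
        dev = min(range(num_devices), key=lambda d: load[d])
        assignment[idx] = dev
        load[dev] += table_sizes[idx]
    return assignment
```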

tginart commented Oct 19, 2020

@mnaumovfb

Please see the PR above. It can be tested with:

```sh
python dlrm/dlrm_s_pytorch.py --use-gpu --md-flag --md-threshold=1 --md-temp=0.2 --arch-sparse-feature-size=400 --arch-mlp-bot="13-512-256-64-400" --arch-mlp-top="512-256-1" --data-generation=dataset --data-set=kaggle --raw-data-file='dlrm/input/train.txt' --processed-data-file='dlrm/input/kaggleAdDisplayChallenge_processed.npz' --loss-function=bce --round-targets=True --learning-rate=0.001 --mini-batch-size=2048 --print-freq=64 --print-time --test-freq=512 --test-mini-batch-size=2048 --solver=amsgrad --print-num-emb-params --use-emb-distrib-heuristic 2>&1 | tee run_kaggle_pt.log
```

Requires at least ~24 GB of GPU memory, either on a single GPU or distributed across multiple GPUs.

Should achieve something like: `Testing at - 19186/19186 of epoch 0, loss 0.445875, accuracy 79.188 %, best 79.188 %`
