Update default main net to nn-1ceb1ade0001.nnue
author    Linmiao Xu <linmiao.xu@gmail.com>
          Fri, 1 Mar 2024 18:34:03 +0000 (10:34 -0800)
committer Disservin <disservin.social@gmail.com>
          Thu, 7 Mar 2024 18:53:48 +0000 (19:53 +0100)
Created by retraining the previous main net `nn-b1a57edbea57.nnue` with:
- some of the same options as before:
  - ranger21, more WDL skipping, 15% more loss when Q is too high
- removal of the huge 514G pre-interleaved binpack
- removal of SF-generated dfrc data (dfrc99-16tb7p-filt-v2.min.binpack)
- interleaving many binpacks at training time
- training with some bestmove capture positions where SEE < 0
- increased usage of torch.compile to speed up training by up to 40%

```yaml
experiment-name: 2560--S10-dfrc0-to-dec2023-skip-more-wdl-15p-more-loss-high-q-see-ge0-sk28
nnue-pytorch-branch: linrock/nnue-pytorch/r21-more-wdl-skip-15p-more-loss-high-q-skip-see-ge0-torch-compile-more
start-from-engine-test-net: True

early-fen-skipping: 28
training-dataset:
  # similar to, but not exactly the same as:
  # https://github.com/official-stockfish/Stockfish/pull/4635
  - /data/S5-5af/leela96.v2.min.binpack
  - /data/S5-5af/test60-2021-11-12-novdec-12tb7p.v6-dd.min.binpack
  - /data/S5-5af/test77-2021-12-dec-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test78-2022-01-to-05-jantomay-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test78-2022-06-to-09-juntosep-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test79-2022-04-apr-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test79-2022-05-may-16tb7p.v6-dd.min.binpack

  - /data/S5-5af/test80-2022-06-jun-16tb7p.v6-dd.min.unmin.binpack
  - /data/S5-5af/test80-2022-07-jul-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2022-08-aug-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2022-09-sep-16tb7p.v6-dd.min.unmin.binpack
  - /data/S5-5af/test80-2022-10-oct-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2022-11-nov-16tb7p.v6-dd.min.binpack

  - /data/S5-5af/test80-2023-01-jan-16tb7p.v6-sk20.min.binpack
  - /data/S5-5af/test80-2023-02-feb-16tb7p.v6-dd.min.binpack
  - /data/S5-5af/test80-2023-03-mar-2tb7p.min.unmin.binpack
  - /data/S5-5af/test80-2023-04-apr-2tb7p.binpack
  - /data/S5-5af/test80-2023-05-may-2tb7p.min.dd.binpack

  # https://github.com/official-stockfish/Stockfish/pull/4782
  - /data/S6-1ee1aba5ed/test80-2023-06-jun-2tb7p.binpack
  - /data/S6-1ee1aba5ed/test80-2023-07-jul-2tb7p.min.binpack

  # https://github.com/official-stockfish/Stockfish/pull/4972
  - /data/S8-baff1edbea57/test80-2023-08-aug-2tb7p.v6.min.binpack
  - /data/S8-baff1edbea57/test80-2023-09-sep-2tb7p.binpack
  - /data/S8-baff1edbea57/test80-2023-10-oct-2tb7p.binpack

  # https://github.com/official-stockfish/Stockfish/pull/5056
  - /data/S9-b1a57edbea57/test80-2023-11-nov-2tb7p.binpack
  - /data/S9-b1a57edbea57/test80-2023-12-dec-2tb7p.binpack

num-epochs: 800
lr: 4.375e-4
gamma: 0.995
start-lambda: 1.0
end-lambda: 0.7
```
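
The binpacks listed above are interleaved at training time instead of being merged
into a single huge pre-interleaved binpack. As a rough sketch only (hypothetical
`BinpackReader` helper; the actual nnue-pytorch data loader is implemented
differently), interleaving can be pictured as randomly drawing the next sample from
one of several open readers:

```python
import random
from typing import Iterator, List

def interleave_binpacks(readers: List[Iterator], seed: int = 0) -> Iterator:
    """Yield training positions from several binpack readers in a random interleaving.

    Readers are assumed to iterate over positions; exhausted readers are dropped
    until no data remains. This is only a sketch of the idea, not the real loader.
    """
    rng = random.Random(seed)
    active = list(readers)
    while active:
        reader = rng.choice(active)
        try:
            yield next(reader)
        except StopIteration:
            active.remove(reader)

# Hypothetical usage with two of the binpacks listed above:
# streams = [BinpackReader(path) for path in (
#     "/data/S5-5af/leela96.v2.min.binpack",
#     "/data/S9-b1a57edbea57/test80-2023-12-dec-2tb7p.binpack",
# )]
# for position in interleave_binpacks(streams):
#     ...
```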

This particular net was reached at epoch 759. Using more torch.compile decorators
in nnue-pytorch model.py than in the previous main net's training run sped up training
by up to 40% on Tesla GPUs with a recent PyTorch build compiled against CUDA 12:
https://github.com/linrock/nnue-tools/blob/7fb9831/Dockerfile
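
For illustration only (not the exact decorators added to model.py), torch.compile in
PyTorch 2.x can wrap a module or individual hot functions so that the first call
JIT-compiles and fuses their ops; the layer sizes below are placeholders:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.ft = nn.Linear(768, 256)   # stand-in for the feature transformer
        self.out = nn.Linear(256, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clipped activation followed by the output layer, as a minimal example.
        return self.out(torch.clamp(self.ft(x), 0.0, 1.0))

# torch.compile can also be used as a decorator on individual functions/methods.
net = torch.compile(TinyNet())
scores = net(torch.rand(16, 768))  # first call triggers compilation
```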

Skipping positions with bestmove captures where static exchange evaluation is >= 0
is based on the implementation from Sopel's NNUE training & experimentation log:
https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY
Experiment 293 - only skip captures with see>=0

Positions with bestmove captures where score == 0 are always skipped for
compatibility with minimized binpacks, since the original minimizer sets
scores to 0 for slight improvements in compression.
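
A minimal sketch of the combined skip rule, assuming hypothetical per-position
fields (the real filtering happens inside the training data loader):

```python
def skip_position(best_move_is_capture: bool, see: int, score: int) -> bool:
    """Return True if a training position should be skipped.

    - Bestmove captures with SEE >= 0 are skipped, so only SEE < 0 captures remain.
    - Bestmove captures with score == 0 are always skipped, since the binpack
      minimizer rewrites such scores to 0.
    """
    if not best_move_is_capture:
        return False
    return see >= 0 or score == 0
```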

The trainer branch used was:
https://github.com/linrock/nnue-pytorch/tree/r21-more-wdl-skip-15p-more-loss-high-q-skip-see-ge0-torch-compile-more

Binpacks were renamed so that sorting them by name also sorts them chronologically.
The binpack data are otherwise the same as the binpacks with similar names under the
prior naming convention.

Training data can be found at:
https://robotmoon.com/nnue-training-data/

Passed STC:
https://tests.stockfishchess.org/tests/view/65e3ddd1f2ef6c733362ae5c
LLR: 2.92 (-2.94,2.94) <0.00,2.00>
Total: 149792 W: 39153 L: 38661 D: 71978
Ptnml(0-2): 675, 17586, 37905, 18032, 698

Passed LTC:
https://tests.stockfishchess.org/tests/view/65e4d91c416ecd92c162a69b
LLR: 2.94 (-2.94,2.94) <0.50,2.50>
Total: 64416 W: 16517 L: 16135 D: 31764
Ptnml(0-2): 38, 7218, 17313, 7602, 37

closes https://github.com/official-stockfish/Stockfish/pull/5090

Bench: 1373183

diff --git a/src/evaluate.h b/src/evaluate.h
index 53928bf6494c833a6543fd6dbde7711e8f14432d..33fed3d5f5f1ad6905c6adcbd98bc4f29a26bb10 100644
--- a/src/evaluate.h
+++ b/src/evaluate.h
@@ -39,7 +39,7 @@ Value evaluate(const Position& pos, int optimism);
 // The default net name MUST follow the format nn-[SHA256 first 12 digits].nnue
 // for the build process (profile-build and fishtest) to work. Do not change the
 // name of the macro, as it is used in the Makefile.
-#define EvalFileDefaultNameBig "nn-b1a57edbea57.nnue"
+#define EvalFileDefaultNameBig "nn-1ceb1ade0001.nnue"
 #define EvalFileDefaultNameSmall "nn-baff1ede1f90.nnue"
 
 struct EvalFile {