Stefano Cardanobile [Sun, 17 Oct 2021 17:01:45 +0000 (19:01 +0200)]
Reformat Eval::evaluate()
Non functional simplification: the goal of this patch is to make
the style in the evaluate() function similar to the rest of the code.
passed STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 95608 W: 24058 L: 24026 D: 47524
Ptnml(0-2): 292, 10379, 26396, 10479, 258
https://tests.stockfishchess.org/tests/view/616c64fd99b580bf37797e4f
closes https://github.com/official-stockfish/Stockfish/pull/3744
Non-functional change
Stéphane Nicolet [Sun, 17 Oct 2021 11:06:33 +0000 (13:06 +0200)]
Remove noLMRExtension flag
This simplification patch removes the noLMRExtension flag. It was introduced in June
(see following link for that commit), but does not seem to be necessary anymore.
Link: https://github.com/official-stockfish/Stockfish/commit/e1f181ee643dcaa92c606b74b3abd23dede136cd
STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 21200 W: 5369 L: 5228 D: 10603
Ptnml(0-2): 67, 2355, 5616, 2494, 68
https://tests.stockfishchess.org/tests/view/616c03d299b580bf37797dcb
LTC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 37536 W: 9387 L: 9278 D: 18871
Ptnml(0-2): 23, 3988, 10643, 4085, 29
https://tests.stockfishchess.org/tests/view/616c10f499b580bf37797ddd
closes https://github.com/official-stockfish/Stockfish/pull/3743
Bench: 4792969
Stéphane Nicolet [Sun, 17 Oct 2021 09:56:35 +0000 (11:56 +0200)]
Allow some LMR double extensions
Allow some LMR double extensions for the second and third sons of each node.
STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 170320 W: 42608 L: 42187 D: 85525
Ptnml(0-2): 516, 19635, 44422, 20086, 501
https://tests.stockfishchess.org/tests/view/616a9e3899b580bf37797cf4
LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 74400 W: 18783 L: 18423 D: 37194
Ptnml(0-2): 46, 7812, 21129, 8162, 51
https://tests.stockfishchess.org/tests/view/616b378499b580bf37797d61
closes https://github.com/official-stockfish/Stockfish/pull/3742
Bench: 4877152
Stefano Cardanobile [Thu, 14 Oct 2021 20:26:42 +0000 (22:26 +0200)]
Smooth improving
Smooth dependency on improvement margin in null move search.
STC
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17384 W: 4468 L: 4272 D: 8644
Ptnml(0-2): 42, 1919, 4592, 2079, 60
https://tests.stockfishchess.org/tests/view/61689b8a1e5f6627cc1c0fdc
LTC
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 45648 W: 11525 L: 11243 D: 22880
Ptnml(0-2): 26, 4731, 13036, 4997, 34
https://tests.stockfishchess.org/tests/view/6168a12c1e5f6627cc1c0fe3
It would be interesting to test whether the other pruning/reduction heuristics
in master which use the improving variable (i.e. the sign of the improvement)
could benefit from a smooth function of the improvement value (or maybe a
ReLU of the improvement value).
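As a purely illustrative sketch of that idea (the function name and the cap value are hypothetical, not master code):
```
#include <algorithm>

// Hypothetical sketch: replace the binary 'improving' flag by a clamped
// (ReLU-like) function of how much the static eval improved since two plies ago.
int improvementTerm(int staticEval, int staticEvalTwoPliesAgo) {
    int improvement = staticEval - staticEvalTwoPliesAgo;
    return std::clamp(improvement, 0, 200);   // ReLU with an arbitrary cap
}
```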
closes https://github.com/official-stockfish/Stockfish/pull/3740
Bench: 4916775
Joost VandeVondele [Wed, 6 Oct 2021 17:16:02 +0000 (19:16 +0200)]
Compute ttCapture earlier
Compute ttCapture earlier, and reuse.
passed STC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 74128 W: 18640 L: 18578 D: 36910
Ptnml(0-2): 224, 7970, 20649, 7962, 259
https://tests.stockfishchess.org/tests/view/615dd9fa1a32f4036ac7fc4d
closes https://github.com/official-stockfish/Stockfish/pull/3734
No functional change
bmc4 [Thu, 14 Oct 2021 03:44:46 +0000 (00:44 -0300)]
Simplify ttHitAverage away
Simplify ttHitAverage away, which was introduced in the following commit:
https://github.com/BM123499/Stockfish/commit/fe124896b241b4791454fd151da10101ad48f6d7
A few tweaks with Elo gaining bounds have been tried to keep the code,
but they all failed:
https://tests.stockfishchess.org/tests/view/61656f7683dd501a05b0b292
https://tests.stockfishchess.org/tests/view/6165c0ca83dd501a05b0b2ca
https://tests.stockfishchess.org/tests/view/6165bf9683dd501a05b0b2c8
https://tests.stockfishchess.org/tests/view/6165719483dd501a05b0b29b
https://tests.stockfishchess.org/tests/view/6166c7fd83dd501a05b0b353
https://tests.stockfishchess.org/tests/view/6166c63b83dd501a05b0b350
STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 58504 W: 14781 L: 14694 D: 29029
Ptnml(0-2): 175, 6718, 15426, 6711, 222
https://tests.stockfishchess.org/tests/view/6165112c83dd501a05b0b257
LTC:
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 33480 W: 8448 L: 8332 D: 16700
Ptnml(0-2): 21, 3569, 9447, 3679, 24
https://tests.stockfishchess.org/tests/view/61656fcf83dd501a05b0b294
closes https://github.com/official-stockfish/Stockfish/pull/3739
Bench: 4540339
Joseph Ellis [Wed, 13 Oct 2021 16:10:50 +0000 (11:10 -0500)]
Simplify multi-cut condition
Now that the multi-cut condition is safer, we can avoid the cost of the sub-search.
STC:
https://tests.stockfishchess.org/tests/view/6165fd9283dd501a05b0b2fe
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 18648 W: 4745 L: 4600 D: 9303
Ptnml(0-2): 47, 2111, 4887, 2208, 71
LTC:
https://tests.stockfishchess.org/tests/view/616629ea83dd501a05b0b320
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 41704 W: 10407 L: 10302 D: 20995
Ptnml(0-2): 35, 4425, 11823, 4538, 31
closes https://github.com/official-stockfish/Stockfish/pull/3738
Bench: 5905086
Michael Chaly [Fri, 8 Oct 2021 23:15:43 +0000 (02:15 +0300)]
Reduce more if multiple moves exceed alpha
The idea of this patch is the following: in case we already have four moves that
exceeded alpha in the current node, the probability of finding a fifth should
be reasonably low. Note that four is completely arbitrary - there could and
probably should be some tweaks, both to the best move count threshold
for more reductions and to how the reductions work - for example, reducing
more linearly with the best move count.
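For illustration only, a minimal sketch of the idea described above (the function name is hypothetical; only the threshold of four comes from the text):
```
// Hypothetical sketch: once several moves have already raised alpha in this
// node, a further best move is unlikely, so reduce the remaining moves more.
int extraReduction(int bestMoveCount) {
    return bestMoveCount >= 4 ? 1 : 0;   // one extra ply of reduction
}
```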
passed STC:
https://tests.stockfishchess.org/tests/view/615f614783dd501a05b0aee2
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 141816 W: 36056 L: 35686 D: 70074
Ptnml(0-2): 499, 15131, 39273, 15511, 494
passed LTC:
https://tests.stockfishchess.org/tests/view/615fdff683dd501a05b0af35
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 68536 W: 17221 L: 16891 D: 34424
Ptnml(0-2): 38, 6573, 20725, 6885, 47
closes https://github.com/official-stockfish/Stockfish/pull/3736
Bench: 6131513
xoto10 [Thu, 27 May 2021 15:04:47 +0000 (16:04 +0100)]
Small clean-up, Sept 2021
Closes https://github.com/official-stockfish/Stockfish/pull/3485
No functional change
Stéphane Nicolet [Mon, 4 Oct 2021 18:37:26 +0000 (20:37 +0200)]
Capping stat bonus at 2000
This patch updates the stat_bonus() function (used in the history tables to
help move ordering), keeping the same quadratic for small depths but changing
the values for depth >= 9:
The old bonus formula increased from zero at depth 1 to 4100 at depth 14,
then used the strangely small value of 73 for all depths >= 15.
The new bonus formula increases from 0 at depth 1 to 2000 at depth 8, then
stays at 2000 for all greater depths.
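A minimal sketch of the new shape (the quadratic coefficient below is a placeholder, not the exact master formula):
```
#include <algorithm>

// Hypothetical sketch of the capped stat bonus: quadratic growth at shallow
// depths, clamped to 2000 from roughly depth 8 onwards.
int statBonusSketch(int depth) {
    int quadratic = 32 * depth * depth;   // placeholder coefficient
    return std::min(quadratic, 2000);     // cap at 2000
}
```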
passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 169624 W: 42875 L: 42454 D: 84295
Ptnml(0-2): 585, 19340, 44557, 19729, 601
https://tests.stockfishchess.org/tests/view/615bd69e9d256038a969b97c
passed LTC:
LLR: 3.07 (-2.94,2.94) <0.50,3.50>
Total: 37336 W: 9456 L: 9191 D: 18689
Ptnml(0-2): 20, 3810, 10747, 4067, 24
https://tests.stockfishchess.org/tests/view/615c75d99d256038a969b9b2
closes https://github.com/official-stockfish/Stockfish/pull/3731
Bench: 6261865
Joost VandeVondele [Tue, 5 Oct 2021 20:14:13 +0000 (22:14 +0200)]
Improve the Chess960 correction for cornered bishops
As Chess960 patches can not be tested on fishtest, this was locally tuned
and tested:
Elo: 2.36 +- 1.07
LOS: 0.999992
closes https://github.com/official-stockfish/Stockfish/pull/3730
Bench: 5714575
J. Oster [Tue, 5 Oct 2021 10:02:25 +0000 (12:02 +0200)]
Time-management fix in MultiPV mode.
When playing games in MultiPV mode, we must take care to only track the
best move changing for the first PV line. Otherwise, SF will spend most
of its time on the initial moves after the book exit.
This has been observed and reported on Discord, but can also be seen in
games played in Stefan Pohl's MultiPV experiment.
Tested with MultiPV=4.
STC:
https://tests.stockfishchess.org/tests/view/615c24b59d256038a969b990
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 1744 W: 694 L: 447 D: 603
Ptnml(0-2): 32, 125, 358, 278, 79
LTC:
https://tests.stockfishchess.org/tests/view/615c31769d256038a969b993
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 2048 W: 723 L: 525 D: 800
Ptnml(0-2): 10, 158, 511, 314, 31
closes https://github.com/official-stockfish/Stockfish/pull/3729
Bench: 5714575
Michael Chaly [Sun, 3 Oct 2021 08:27:40 +0000 (11:27 +0300)]
Increase reductions with thread count
Respin of a multi-thread idea that was simplified away recently: basically, do
more reductions with higher thread counts, since Lazy SMP naturally widens the search.
With a drawish book this idea got simplified away, but with a less drawish book it
gains Elo again. Maybe trying to reinstate other ideas that were previously
simplified away can be beneficial.
passed STC
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 39736 W: 10205 L: 9986 D: 19545
Ptnml(0-2): 45, 4254, 11064, 4447, 58
https://tests.stockfishchess.org/tests/view/615750702d02f48db3961b00
passed LTC
LLR: 2.97 (-2.94,2.94) <0.50,3.50>
Total: 60352 W: 15530 L: 15218 D: 29604
Ptnml(0-2): 24, 5900, 18016, 6212, 24
https://tests.stockfishchess.org/tests/view/6157d8935488e26ea5eace7f
closes https://github.com/official-stockfish/Stockfish/pull/3724
Bench: 5714575
Michael Chaly [Sun, 26 Sep 2021 04:39:27 +0000 (06:39 +0200)]
Extend quiet tt moves at PvNodes
The idea is to extend some quiet ttMoves if a lot of things indicate that
the transposition table move is going to be a good move:
1) the move is a killer - i.e. it was the best move in a nearby node;
2) the reply continuation history is really good.
This is basically saying that the move is good "in general" in this position,
that it is a good reply to the opponent's move, and that it was the best in
this position somewhere in the search - so extending it makes a lot of sense.
In general, over the past year we have added a lot of extensions of different
types; maybe there is something more to be found here :)
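A rough sketch of the kind of condition described (all names and the history threshold are hypothetical):
```
// Hypothetical sketch: extend a quiet ttMove only when several signals agree.
bool shouldExtendQuietTtMove(bool pvNode, bool ttMoveIsQuiet,
                             bool ttMoveIsKiller, int replyContinuationHistory) {
    return pvNode
        && ttMoveIsQuiet
        && ttMoveIsKiller                       // best move in a nearby node
        && replyContinuationHistory > 10000;    // arbitrary "really good" threshold
}
```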
passed STC
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 42944 W: 10932 L: 10695 D: 21317
Ptnml(0-2): 141, 4869, 11210, 5116, 136
https://tests.stockfishchess.org/tests/view/614cca8e7bdc23e77ceb89f0
passed LTC
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 156848 W: 39473 L: 38893 D: 78482
Ptnml(0-2): 125, 16327, 44913, 16961, 98
https://tests.stockfishchess.org/tests/view/614cf93d7bdc23e77ceb8a13
closes https://github.com/official-stockfish/Stockfish/pull/3719
Bench: 5714575
Stéphane Nicolet [Sat, 25 Sep 2021 17:37:47 +0000 (19:37 +0200)]
Reduction instead of cutoff
In master, during singular move analysis, when both the transposition value
and a reduced search for the other moves seem to indicate a fail high, we
heuristically prune the whole subtree and return a fail-high score.
This patch is a little bit more cautious in this case: instead of the
risky cutoff, we now search the ttMove with a reduced depth (by two plies).
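Schematically, and with hypothetical names rather than the actual search code:
```
// Hypothetical sketch: in the singular analysis, when both the tt value and
// the reduced search of the other moves suggest a fail high, master returned
// a fail-high score; the patch instead reduces the depth for the ttMove search.
struct SingularDecision {
    bool returnCutoff;   // old behaviour: return a fail-high score immediately
    int  searchDepth;    // new behaviour: keep searching, two plies shallower
};

SingularDecision onProbableFailHigh(int depth) {
    return { /*returnCutoff=*/false, /*searchDepth=*/depth - 2 };
}
```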
STC:
https://tests.stockfishchess.org/tests/view/614dafe07bdc23e77ceb8a89
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 46728 W: 11909 L: 11666 D: 23153
Ptnml(0-2): 181, 5288, 12168, 5561, 166
LTC:
https://tests.stockfishchess.org/tests/view/614dc84abe4c07e0ecac3c95
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 74520 W: 18809 L: 18450 D: 37261
Ptnml(0-2): 45, 7735, 21346, 8084, 50
closes https://github.com/official-stockfish/Stockfish/pull/3718
Bench: 5499262
OfekShochat [Thu, 23 Sep 2021 20:16:17 +0000 (23:16 +0300)]
Range reductions
adding reductions for when the delta between the static eval and the child's eval is consistently low.
passed STC
https://tests.stockfishchess.org/html/live_elo.html?614d7b3c7bdc23e77ceb8a5d
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 88872 W: 22672 L: 22366 D: 43834
Ptnml(0-2): 343, 10150, 23117, 10510, 316
passed LTC
https://tests.stockfishchess.org/html/live_elo.html?614daf3e7bdc23e77ceb8a82
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 24368 W: 6153 L: 5928 D: 12287
Ptnml(0-2): 13, 2503, 6937, 2708, 23
closes https://github.com/official-stockfish/Stockfish/pull/3717
Bench: 5443950
Stéphane Nicolet [Thu, 23 Sep 2021 09:20:03 +0000 (11:20 +0200)]
Tweak doubly singular condition (Topo's patch)
This patch relaxes a little bit the condition for doubly singular moves
(i.e. moves that are so forced that we think they deserve a local
double extension of the search). We lower the margin and allow up to
six such double extensions on the path between the root and the critical
node.
Original idea by Siad Daboul (@TopoIogist) in PR #3709
Tested with the previous commit:
passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 33048 W: 8458 L: 8236 D: 16354
Ptnml(0-2): 120, 3701, 8660, 3923, 120
https://tests.stockfishchess.org/tests/view/614b24347bdc23e77ceb88fe
passed LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 54176 W: 13712 L: 13406 D: 27058
Ptnml(0-2): 36, 5653, 15399, 5969, 31
https://tests.stockfishchess.org/tests/view/614b3b727bdc23e77ceb8911
closes https://github.com/official-stockfish/Stockfish/pull/3714
Bench: 5792377
Stéphane Nicolet [Thu, 23 Sep 2021 21:19:06 +0000 (23:19 +0200)]
Detect search explosions
This patch detects some search explosions (due to double extensions in
search.cpp) which can happen in some pathological positions, and takes
measures to ensure progress in search even for these pathological situations.
While a small number of double extensions can be useful during search
(for example to resolve a tactical sequence), a sustained regime of
double extensions leads to search explosion and a non-finishing search.
See the discussion in https://github.com/official-stockfish/Stockfish/pull/3544
and the issue https://github.com/official-stockfish/Stockfish/issues/3532 .
The implemented algorithm is the following:
a) at each node during search, store the current depth in the stack.
Double extensions are by definition levels of the stack where the
depth at ply N is strictly higher than depth at ply N-1.
b) during search, calculate for each thread a running average of the
number of double extensions in the last 4096 visited nodes.
c) if one thread has more than 2% of double extensions for a sustained
period of time (6 million consecutive nodes, or about 4 seconds on
my iMac), we decide that this thread is in an explosion state and
we calm this thread down by preventing it from doing any double extension
for the next 6 million nodes.
To calculate the running averages, we also introduced an auxiliary class
generalizing the computation of the ttHitAverage variable we already had in
the code. The implementation uses an exponential moving average of period 4096
and resolution 1/1024, and all computations are done with integers for
efficiency.
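A small, self-contained sketch of such a helper, assuming it works like the description above (period 4096, resolution 1/1024, integer-only arithmetic); the names are hypothetical:
```
#include <cstdint>

// Hypothetical sketch of an integer exponential moving average. Samples are
// 0 or 1 (e.g. "was this node a double extension?"); the average is kept
// scaled by Resolution so everything stays in integer arithmetic.
template<int Period = 4096, int Resolution = 1024>
class MovingAverage {
    std::int64_t value = 0;   // average * Resolution
public:
    void update(int sample) {
        value += (std::int64_t(sample) * Resolution - value) / Period;
    }
    bool greaterThan(double ratio) const {   // e.g. greaterThan(0.02) for 2%
        return value > static_cast<std::int64_t>(ratio * Resolution);
    }
};
```
Each thread would then call update(1) or update(0) at every visited node and enter the calming-down state once greaterThan(0.02) has held for long enough.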
-----------
Example where the patch solves a search explosion:
```
./stockfish
ucinewgame
position fen 8/Pk6/8/1p6/8/P1K5/8/6B1 w - - 37 130
go infinite
```
This algorithm does not affect search in normal, non-pathological positions.
We verified, for instance, that the usual bench is unchanged up to depth 20
at least, and that the node numbers are unchanged for a search of the starting
position at depth 32.
-------------
See https://github.com/official-stockfish/Stockfish/pull/3714
Bench: 5575265
Michael Chaly [Mon, 20 Sep 2021 12:04:13 +0000 (15:04 +0300)]
Combo of various parameter tweaks
Combination of parameter tweaks in search, evaluation and time management.
Original patches by snicolet xoto10 lonfom169 and Vizvezdenec.
Includes:
* Use bigger grain of positional evaluation more frequently (up to 1 exchange difference in non-pawn-material);
* More extra time according to increment;
* Increase margin for singular extensions;
* Do more aggressive parent node futility pruning.
Passed STC
https://tests.stockfishchess.org/tests/view/6147deab3733d0e0dd9f313d
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 45488 W: 11691 L: 11450 D: 22347
Ptnml(0-2): 145, 5208, 11824, 5395, 172
Passed LTC
https://tests.stockfishchess.org/tests/view/6147f1d53733d0e0dd9f3141
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 62520 W: 15808 L: 15482 D: 31230
Ptnml(0-2): 43, 6439, 17960, 6785, 33
closes https://github.com/official-stockfish/Stockfish/pull/3710
Bench: 5575265
xoto10 [Thu, 16 Sep 2021 07:43:53 +0000 (08:43 +0100)]
Increase optimumTime by 10%
STC 10+0.1 :
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 47032 W: 12078 L: 11841 D: 23113
Ptnml(0-2): 159, 5098, 12746, 5373, 140
https://tests.stockfishchess.org/tests/view/613f9df1f29dda16fcca8731
LTC 60+0.6 :
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 66248 W: 16631 L: 16301 D: 33316
Ptnml(0-2): 44, 6560, 19578, 6906, 36
https://tests.stockfishchess.org/tests/view/6140603d7315e7c73204a4c1
Non-regression tests with other time control styles:
Moves/Time 40/10+0 :
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 51640 W: 13350 L: 13254 D: 25036
Ptnml(0-2): 183, 5770, 13797, 5908, 162
https://tests.stockfishchess.org/tests/view/6141592b7315e7c73204a599
TCEC Style 10+0.01 :
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 20592 W: 5300 L: 5157 D: 10135
Ptnml(0-2): 81, 2240, 5544, 2317, 114
https://tests.stockfishchess.org/tests/view/61425bb27315e7c73204a6a2
Sudden death 15+0 :
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 127104 W: 32728 L: 32741 D: 61635
Ptnml(0-2): 735, 13973, 34149, 13960, 735
https://tests.stockfishchess.org/tests/view/614256a77315e7c73204a699
The first 3 tests were run with an initial version of the code, which was then modified to make the amount of extra time dependent on the size of increment. No increment gives no extra time, and the extra time given increases until an increment of 1% or more of remaining time gives 10% extra thinking time.
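To illustrate the final behaviour described in the previous paragraph (a sketch with hypothetical names; only the 1% and 10% figures come from the text):
```
#include <algorithm>

// Hypothetical sketch: the extra thinking time grows with the increment.
// No increment -> no bonus; increment >= 1% of remaining time -> full 10% bonus.
double optimumTimeFactor(double incrementMs, double remainingMs) {
    double ratio = incrementMs / std::max(remainingMs, 1.0);
    double bonus = 0.10 * std::min(ratio / 0.01, 1.0);
    return 1.0 + bonus;   // multiply the base optimum time by this factor
}
```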
closes https://github.com/official-stockfish/Stockfish/pull/3702
Bench: 6658747
SFisGOD [Mon, 13 Sep 2021 16:28:33 +0000 (00:28 +0800)]
Update default net to nn-13406b1dcbe0.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6134abc425b9b35584838572
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-6762d36ad265.nnue
New net: nn-c9fdeea14cb2.nnue
SPSA 2: https://tests.stockfishchess.org/tests/view/61355b7e25b9b3558483860e
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-c9fdeea14cb2.nnue
New net: nn-0ddc28184f4c.nnue
SPSA 3: https://tests.stockfishchess.org/tests/view/613737be0cd98ab40c0c9e4e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-0ddc28184f4c.nnue
New net: nn-2419828bb394.nnue
SPSA 4: https://tests.stockfishchess.org/tests/view/613966ff689039fce12e0fe7
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-2419828bb394.nnue
New net: nn-05d9b1ee3037.nnue
SPSA 5: https://tests.stockfishchess.org/tests/view/613b4a38689039fce12e1209
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-05d9b1ee3037.nnue
New net: nn-98c6ce0fc15f.nnue
SPSA 6: https://tests.stockfishchess.org/tests/view/613e331515591e7c9ebc3fe9
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-98c6ce0fc15f.nnue
New net: nn-13406b1dcbe0.nnue
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 82008 W: 21044 L: 20752 D: 40212
Ptnml(0-2): 264, 9341, 21525, 9587, 287
https://tests.stockfishchess.org/tests/view/613f7c6cf29dda16fcca870c
LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 182928 W: 46258 L: 45602 D: 91068
Ptnml(0-2): 107, 19448, 51712, 20076, 121
https://tests.stockfishchess.org/tests/view/613fccb97315e7c73204a48c
Closes #3703
Bench: 6658747
xoto10 [Sun, 12 Sep 2021 08:19:38 +0000 (09:19 +0100)]
Update 2 search parameters after tune.
A tuning run on 3 search parameters was done with 200k games, narrow ranges (50-150%) and a small value for A (3% of total games) :
https://tests.stockfishchess.org/tests/view/613b5f4b689039fce12e1220
STC 10+0.1 :
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 73112 W: 18800 L: 18520 D: 35792
Ptnml(0-2): 205, 8395, 19115, 8597, 244
https://tests.stockfishchess.org/tests/view/613cb8d2689039fce12e1308
LTC 60+0.6 :
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 45616 W: 11604 L: 11321 D: 22691
Ptnml(0-2): 24, 4769, 12946, 5038, 31
https://tests.stockfishchess.org/tests/view/613d07048253e53e97b55b32
closes https://github.com/official-stockfish/Stockfish/pull/3698
Bench: 6504816
Michael Chaly [Fri, 10 Sep 2021 08:38:50 +0000 (11:38 +0300)]
Decrease depth for cutnodes with no tt move
By analogy with the existing logic of decreasing depth for PvNodes without a tt move,
do the same for cutNodes.
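Schematically (hypothetical names and threshold; the commit message does not give the exact values):
```
// Hypothetical sketch: like the existing PvNode rule, search cut nodes that
// have no transposition table move at a slightly lower depth.
int adjustedDepth(int depth, bool cutNode, bool hasTtMove) {
    if (cutNode && !hasTtMove && depth >= 8)   // threshold is a guess
        return depth - 1;
    return depth;
}
```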
Passed STC
https://tests.stockfishchess.org/tests/view/613abf5a689039fce12e1155
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 90336 W: 23108 L: 22804 D: 44424
Ptnml(0-2): 286, 10316, 23642, 10656, 268
Passed LTC
https://tests.stockfishchess.org/tests/view/613ae330689039fce12e1172
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 37736 W: 9607 L: 9346 D: 18783
Ptnml(0-2): 21, 3917, 10730, 4180, 20
closes https://github.com/official-stockfish/Stockfish/pull/3697
Bench: 5891181
Stefan Geschwentner [Tue, 7 Sep 2021 11:22:20 +0000 (13:22 +0200)]
Further improve history updates
Now even double the history updates if a search fails low at an expected PV or CUT node.
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 30736 W: 7891 L: 7674 D: 15171
Ptnml(0-2): 90, 3477, 8017, 3694, 90
https://tests.stockfishchess.org/tests/view/61364ae30cd98ab40c0c9da5
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 73600 W: 18684 L: 18326 D: 36590
Ptnml(0-2): 41, 7734, 20899, 8078, 48
https://tests.stockfishchess.org/tests/view/6136940f0cd98ab40c0c9df3
closes https://github.com/official-stockfish/Stockfish/pull/3694
Bench: 6030657
Stefan Geschwentner [Mon, 6 Sep 2021 09:22:58 +0000 (11:22 +0200)]
Improve history updates
If a search fails low at an expected PV or CUT node, do greater history updates.
STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 95112 W: 24293 L: 23982 D: 46837
Ptnml(0-2): 285, 10893, 24906, 11170, 302
https://tests.stockfishchess.org/tests/view/6132aa1a2ffb3c36aceb926f
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 116352 W: 29450 L: 28975 D: 57927
Ptnml(0-2): 93, 12263, 32984, 12748, 88
https://tests.stockfishchess.org/tests/view/613394d12ffb3c36aceb92f4
closes https://github.com/official-stockfish/Stockfish/pull/3693
Bench: 6130736
SFisGOD [Sun, 5 Sep 2021 00:33:01 +0000 (08:33 +0800)]
Update default net to nn-6762d36ad265.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/612cdb1fbb4956d8b78eb5ab
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-fe433fd8c7f6.nnue
New net: nn-5f134823db04.nnue
SPSA 2: https://tests.stockfishchess.org/tests/view/612fcde645091e810014af19
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-5f134823db04.nnue
New net: nn-8eca5dd4e3f7.nnue
SPSA 3: https://tests.stockfishchess.org/tests/view/6130822345091e810014af61
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-8eca5dd4e3f7.nnue
New net: nn-4556108e4f00.nnue
SPSA 4: https://tests.stockfishchess.org/tests/view/613287652ffb3c36aceb923c
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-4556108e4f00.nnue
New net: nn-6762d36ad265.nnue
STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 162776 W: 41220 L: 40807 D: 80749
Ptnml(0-2): 517, 18800, 42359, 19177, 535
https://tests.stockfishchess.org/tests/view/6134107125b9b35584838559
LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 41056 W: 10428 L: 10156 D: 20472
Ptnml(0-2): 30, 4288, 11618, 4564, 28
https://tests.stockfishchess.org/tests/view/6134ad6525b9b3558483857a
closes https://github.com/official-stockfish/Stockfish/pull/3691
Bench: 5812158
Michael Chaly [Sun, 5 Sep 2021 21:17:46 +0000 (00:17 +0300)]
Extend captures and promotions
This patch introduces an extension for captures and promotions. Every capture or
promotion that is not the first move in the list gets extended at PvNodes and
cutNodes. Special thanks to @locutus2 - all my previous attempts that failed
on this idea were done only for PvNodes; the idea to also include cutNodes was
based on his latest passed patch.
STC
https://tests.stockfishchess.org/tests/view/6134abf325b9b35584838574
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 188920 W: 47754 L: 47304 D: 93862
Ptnml(0-2): 595, 21754, 49344, 22140, 627
LTC
https://tests.stockfishchess.org/tests/view/613521de25b9b355848385d7
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 8768 W: 2283 L: 2098 D: 4387
Ptnml(0-2): 7, 866, 2452, 1053, 6
closes https://github.com/official-stockfish/Stockfish/pull/3692
Bench: 5564555
SFisGOD [Sun, 29 Aug 2021 16:19:46 +0000 (00:19 +0800)]
Update default net to nn-735bba95dec0.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/61286d8b62d20cf82b5ad1bd
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-33495fe25081.nnue
New net: nn-83e3cf2af92b.nnue
SPSA 2: https://tests.stockfishchess.org/tests/view/6129cf2162d20cf82b5ad25f
Parameters: A total of 64 net biases were tuned (hidden layer 1)
Base net: nn-83e3cf2af92b.nnue
New net: nn-69a528eaef35.nnue
SPSA 3: https://tests.stockfishchess.org/tests/view/612a0dcb62d20cf82b5ad2a0
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-69a528eaef35.nnue
New net: nn-735bba95dec0.nnue
STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 95144 W: 24310 L: 23999 D: 46835
Ptnml(0-2): 232, 11059, 24748, 11232, 301
https://tests.stockfishchess.org/tests/view/612bb3be0fdf40644b4b9996
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 33632 W: 8522 L: 8271 D: 16839
Ptnml(0-2): 18, 3511, 9516, 3744, 27
https://tests.stockfishchess.org/tests/view/612ce5b9bb4956d8b78eb5b3
Closes https://github.com/official-stockfish/Stockfish/pull/3685
Bench: 5600615
VoyagerOne [Fri, 27 Aug 2021 18:25:09 +0000 (14:25 -0400)]
CMH Pruning Tweak
Tweak pruning formula by adding up CMH values.
STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 14608 W: 3837 L: 3641 D: 7130
Ptnml(0-2): 27, 1681, 3723, 1815, 58
https://tests.stockfishchess.org/tests/view/612792f362d20cf82b5ad156
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53520 W: 13580 L: 13276 D: 26664
Ptnml(0-2): 28, 5610, 15183, 5908, 31
https://tests.stockfishchess.org/tests/view/6127d27062d20cf82b5ad191
closes https://github.com/official-stockfish/Stockfish/pull/3682
Bench: 5186641
SFisGOD [Thu, 26 Aug 2021 10:07:41 +0000 (18:07 +0800)]
Update default net to nn-33495fe25081.nnue
STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 37368 W: 9621 L: 9391 D: 18356
Ptnml(0-2): 117, 4287, 9664, 4481, 135
https://tests.stockfishchess.org/tests/view/612768165318138ee1204977
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 13328 W: 3446 L: 3246 D: 6636
Ptnml(0-2): 11, 1383, 3682, 1571, 17
https://tests.stockfishchess.org/tests/view/6127dc8d62d20cf82b5ad196
Closes https://github.com/official-stockfish/Stockfish/pull/3679
Bench: 5179347
ppigazzini [Sun, 22 Aug 2021 13:44:30 +0000 (15:44 +0200)]
Use "pedantic" flag also for mingw
This avoids running a test on fishtest where the Linux machines exit from
the build process and only the Windows machines run the test.
See:
https://tests.stockfishchess.org/tests/view/61122d732a8a49ac5be79996
https://github.com/SFisGOD/Stockfish/commit/4e422577d6ebd1f6ecf606189190b8f6fb03f6c9#comments
closes https://github.com/official-stockfish/Stockfish/pull/3671
No functional change.
Joost VandeVondele [Thu, 26 Aug 2021 20:44:49 +0000 (22:44 +0200)]
Fix empty EvalFile option
some GUIs send an empty string for EvalFile, in that case explicitly try the default name
fixes https://github.com/official-stockfish/Stockfish/issues/3675
closes https://github.com/official-stockfish/Stockfish/pull/3678
No functional change.
bmc4 [Sat, 21 Aug 2021 04:53:03 +0000 (01:53 -0300)]
Simplify Declaration on Pawn Move Generation
Removes possible micro-optimization in favor of readability.
STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 75432 W: 5824 L: 5777 D: 63831
Ptnml(0-2): 178, 4648, 28036, 4657, 197
https://tests.stockfishchess.org/tests/view/611fa7f84977aa1525c9cb75
LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 41200 W: 1156 L: 1106 D: 38938
Ptnml(0-2): 13, 981, 18562, 1031, 13
https://tests.stockfishchess.org/tests/view/611fcc694977aa1525c9cb9b
Closes https://github.com/official-stockfish/Stockfish/pull/3669
No functional change
SFisGOD [Fri, 20 Aug 2021 10:30:27 +0000 (18:30 +0800)]
Update default net to nn-517c4f68b5df.nnue
SPSA: https://tests.stockfishchess.org/tests/view/611cf0da4977aa1525c9ca03
Parameters: 256 net weights and 8 net biases (output layer)
Base net: nn-ac5605a608d6.nnue
New net: nn-517c4f68b5df.nnue
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11600 W: 998 L: 851 D: 9751
Ptnml(0-2): 30, 705, 4186, 846, 33
https://tests.stockfishchess.org/tests/view/611f84524977aa1525c9cb5b
LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 9360 W: 338 L: 243 D: 8779
Ptnml(0-2): 0, 220, 4151, 303, 6
https://tests.stockfishchess.org/tests/view/611f8c5b4977aa1525c9cb64
closes https://github.com/official-stockfish/Stockfish/pull/3667
Bench: 4844618
candirufish [Fri, 20 Aug 2021 08:37:22 +0000 (10:37 +0200)]
do more LMR extensions for PV nodes
LMR Pv and depth 6 Extension tweak:
LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 52488 W: 1542 L: 1394 D: 49552
Ptnml(0-2): 18, 1253, 23552, 1405, 16
https://tests.stockfishchess.org/tests/view/611e49c34977aa1525c9caa7
STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 76216 W: 6000 L: 5784 D: 64432
Ptnml(0-2): 204, 4745, 28006, 4937, 216
https://tests.stockfishchess.org/tests/view/611e0e254977aa1525c9ca89
closes https://github.com/official-stockfish/Stockfish/pull/3666
Bench: 5046381
bmc4 [Mon, 7 Jun 2021 04:20:39 +0000 (01:20 -0300)]
Simplify Null Move Search Reduction
slightly simpler formula for reduction computation.
first round of tests:
STC:
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 15632 W: 1319 L: 1204 D: 13109
Ptnml(0-2): 33, 956, 5733, 1051, 43
https://tests.stockfishchess.org/tests/view/60bd03c7457376eb8bcaa600
LTC:
LLR: 3.37 (-2.94,2.94) <-2.50,0.50>
Total: 86296 W: 2814 L: 2779 D: 80703
Ptnml(0-2): 33, 2500, 38039, 2551, 25
https://tests.stockfishchess.org/tests/view/60bd1ff0457376eb8bcaa653
recent tests:
STC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 23936 W: 1895 L: 1793 D: 20248
Ptnml(0-2): 40, 1470, 8869, 1526, 63
https://tests.stockfishchess.org/tests/view/611f9b7d4977aa1525c9cb6b
LTC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 62568 W: 1750 L: 1713 D: 59105
Ptnml(0-2): 19, 1560, 28085, 1605, 15
https://tests.stockfishchess.org/tests/view/611fa4814977aa1525c9cb71
functional on high depth
closes https://github.com/official-stockfish/Stockfish/pull/3535
Bench: 5375286
Tomasz Sobczyk [Mon, 16 Aug 2021 10:19:26 +0000 (12:19 +0200)]
Optimize and tidy up affine transform code.
The new network caused some issues initially due to the very narrow neuron set between the first two FC layers. Necessary changes were hacked together to make it work. This patch is a mature approach to make the affine transform code faster, more readable, and easier to maintain should the layer sizes change again.
The following changes were made:
* ClippedReLU always produces a multiple of 32 outputs. This is about as good of a solution for AffineTransform's SIMD requirements as it can get without a bigger rewrite.
* All self-contained simd helpers are moved to a separate file (simd.h). Inline asm is utilized to work around GCC's issues with code generation and register assignment. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101693, https://godbolt.org/z/da76fY1n7
* AffineTransform has 2 specializations. While it's more lines of code due to the boilerplate, the logic in both is significantly reduced, as these two are impossible to nicely combine into one.
1) The first specialization is for cases when there's >=128 inputs. It uses a different approach to perform the affine transform and can make full use of AVX512 without any edge cases. Furthermore, it has higher theoretical throughput because less loads are needed in the hot path, requiring only a fixed amount of instructions for horizontal additions at the end, which are amortized by the large number of inputs.
2) The second specialization is made to handle smaller layers where performance is still necessary but edge cases need to be handled. The AVX512 implementation for this was omitted by mistake, a remnant from the temporary implementation for the new... This could easily be reintroduced if needed. A slightly more detailed description of both implementations is in the code.
Overall it should be a minor speedup, as shown on fishtest:
passed STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 51520 W: 4074 L: 3888 D: 43558
Ptnml(0-2): 111, 3136, 19097, 3288, 128
and various tests shown in the pull request
closes https://github.com/official-stockfish/Stockfish/pull/3663
No functional change
Tomasz Sobczyk [Fri, 13 Aug 2021 20:20:11 +0000 (22:20 +0200)]
Improve handling of the debug log file.
Fix handling of empty strings in uci options and reassigning of the log file
Fixes https://github.com/official-stockfish/Stockfish/issues/3650
Closes https://github.com/official-stockfish/Stockfish/pull/3655
No functional change
Torsten Hellwig [Wed, 18 Aug 2021 07:12:14 +0000 (09:12 +0200)]
Update default net to nn-ac5605a608d6.nnue
This net was created with the nnue-pytorch trainer, it used the previous master net as a starting point.
The training data includes all T60 data (https://drive.google.com/drive/folders/1rzZkgIgw7G5vQMLr2hZNiUXOp7z80613), all T74 data (https://drive.google.com/drive/folders/1aFUv3Ih3-A8Vxw9064Kw_FU4sNhMHZU-) and the wrongNNUE_02_d9.binpack (https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq). The Leela data were randomly named and then concatenated. All data was merged into one binpack using interleave_binpacks.py.
python3 train.py \
../data/t60_t74_wrong.binpack \
../data/t60_t74_wrong.binpack \
--resume-from-model ../data/nn-e8321e467bf6.pt \
--gpus 1 \
--threads 4 \
--num-workers 1 \
--batch-size 16384 \
--progress_bar_refresh_rate 300 \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--max_epochs=600 \
--seed $RANDOM \
--default_root_dir ../output/exp_24
STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 15320 W: 1415 L: 1257 D: 12648
Ptnml(0-2): 50, 1002, 5402, 1152, 54
https://tests.stockfishchess.org/tests/view/611c404a4977aa1525c9c97f
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 9440 W: 345 L: 248 D: 8847
Ptnml(0-2): 3, 222, 4175, 315, 5
https://tests.stockfishchess.org/tests/view/611c6c7d4977aa1525c9c996
LTC with UHO_XXL_+0.90_+1.19.epd:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 6232 W: 1638 L: 1459 D: 3135
Ptnml(0-2): 5, 592, 1744, 769, 6
https://tests.stockfishchess.org/tests/view/611c9b214977aa1525c9c9cb
closes https://github.com/official-stockfish/Stockfish/pull/3664
Bench: 5375286
Joost VandeVondele [Sun, 15 Aug 2021 13:11:04 +0000 (15:11 +0200)]
Regenerate dependencies on code change
fixes https://github.com/official-stockfish/Stockfish/issues/3658
Dependencies are now regenerated for each code change. This adds about 1s of overhead to compile time, but avoids potential miscompilations or build problems.
closes https://github.com/official-stockfish/Stockfish/pull/3659
No functional change
Tomasz Sobczyk [Tue, 27 Jul 2021 13:00:22 +0000 (15:00 +0200)]
New NNUE architecture and net
Introduces a new NNUE network architecture and associated network parameters
The summary of the changes:
* Position for each perspective mirrored such that the king is on e..h files. Cuts the feature transformer size in half, while preserving enough knowledge to be good. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.b40q4rb1w7on.
* The number of neurons after the feature transformer increased two-fold, to 1024x2. This is possibly mostly due to the now very optimized feature transformer update code.
* The number of neurons after the second layer is reduced from 16 to 8, to reduce the speed impact. This, perhaps surprisingly, doesn't harm the strength much. See https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit#heading=h.6qkocr97fezq
The AffineTransform code did not work out of the box with the smaller number of neurons after the second layer, so some temporary changes have been made to add a special case for InputDimensions == 8. Also additional 0 padding is added to the output for some archs that cannot process inputs by <=8 (SSE2, NEON). VNNI uses an implementation that can keep all outputs in the registers while reducing the number of loads by 3 for each 16 inputs, thanks to the reduced number of output neurons. However, GCC is particularly bad at optimization here (and this is perhaps why the current way the affine transform is done even passed SPRT) (see https://docs.google.com/document/d/1gTlrr02qSNKiXNZ_SuO4-RjK4MXBiFlLE6jvNqqMkAY/edit# for details) and more work will be done on this in the following days. I expect the current VNNI implementation to be improved and extended to other architectures.
The network was trained with a slightly modified version of the pytorch trainer (https://github.com/glinscott/nnue-pytorch); the changes are in https://github.com/glinscott/nnue-pytorch/pull/143
The training utilized 2 datasets.
dataset A - https://drive.google.com/file/d/1VlhnHL8f-20AXhGkILujnNXHwy9T-MQw/view?usp=sharing
dataset B - as described in https://github.com/official-stockfish/Stockfish/commit/ba01f4b95448bcb324755f4dd2a632a57c6e67bc
The training process was as follows:
train on dataset A for 350 epochs, take the best net in terms of elo at 20k nodes per move (it's fine to take anything from later stages of training).
convert the .ckpt to .pt
--resume-from-model from the .pt file, train on dataset B for <600 epochs, take the best net. Lambda=0.8, applied before the loss function.
The first training command:
python3 train.py \
../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
../nnue-pytorch-training/data/large_gensfen_multipvdiff_100_d9.binpack \
--gpus "$3," \
--threads 1 \
--num-workers 1 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--smart-fen-skipping \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=1.0 \
--max_epochs=600 \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
The second training command:
python3 serialize.py \
--features=HalfKAv2_hm^ \
../nnue-pytorch-training/experiment_131/run_6/default/version_0/checkpoints/epoch-499.ckpt \
../nnue-pytorch-training/experiment_$1/base/base.pt
python3 train.py \
../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
../nnue-pytorch-training/data/michael_commit_b94a65.binpack \
--gpus "$3," \
--threads 1 \
--num-workers 1 \
--batch-size 16384 \
--progress_bar_refresh_rate 20 \
--smart-fen-skipping \
--random-fen-skipping 3 \
--features=HalfKAv2_hm^ \
--lambda=0.8 \
--max_epochs=600 \
--resume-from-model ../nnue-pytorch-training/experiment_$1/base/base.pt \
--default_root_dir ../nnue-pytorch-training/experiment_$1/run_$2
STC: https://tests.stockfishchess.org/tests/view/611120b32a8a49ac5be798c4
LLR: 2.97 (-2.94,2.94) <-0.50,2.50>
Total: 22480 W: 2434 L: 2251 D: 17795
Ptnml(0-2): 101, 1736, 7410, 1865, 128
LTC: https://tests.stockfishchess.org/tests/view/611152b32a8a49ac5be798ea
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9776 W: 442 L: 333 D: 9001
Ptnml(0-2): 5, 295, 4180, 402, 6
closes https://github.com/official-stockfish/Stockfish/pull/3646
Bench: 5189338
Joost VandeVondele [Thu, 5 Aug 2021 14:34:37 +0000 (16:34 +0200)]
Revert futility pruning patches
Reverts 09b6d28391cf582d99897360b225bcbbe38dd1c6 and
dbd7f602d3c7622df294f87d7239b5aaf31f695f, which significantly impact mate-finding
capabilities. For example, on ChestUCI_23102018.epd at 1M nodes, the number of
mates found is nearly halved without these depth conditions:
sf6 2091
sf7 2093
sf8 2107
sf9 2062
sf10 2208
sf11 2552
sf12 2563
sf13 2509
sf14 2427
master 1246
patched 2467
(script for testing at https://github.com/official-stockfish/Stockfish/files/6936412/matecheck.zip)
closes https://github.com/official-stockfish/Stockfish/pull/3641
fixes https://github.com/official-stockfish/Stockfish/issues/3627
Bench: 5467570
VoyagerOne [Thu, 5 Aug 2021 12:50:24 +0000 (08:50 -0400)]
SEE simplification
Simplified SEE formula by removing std::min. Should also be easier to tune.
STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 22656 W: 1836 L: 1729 D: 19091
Ptnml(0-2): 54, 1426, 8267, 1521, 60
https://tests.stockfishchess.org/tests/view/610ae62f2a8a49ac5be79449
LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 26248 W: 806 L: 744 D: 24698
Ptnml(0-2): 6, 668, 11715, 728, 7
https://tests.stockfishchess.org/tests/view/610b17ad2a8a49ac5be79466
closes https://github.com/official-stockfish/Stockfish/pull/3643
Bench: 4915145
SFisGOD [Wed, 4 Aug 2021 11:26:06 +0000 (19:26 +0800)]
Update default net to nn-46832cfbead3.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/6100e7f096b86d98abf6a832
Parameters: A total of 256 net weights and 8 net biases were tuned (output layer)
Base net: nn-56a5f1c4173a.nnue
New net: nn-ec3c8e029926.nnue
SPSA 2: https://tests.stockfishchess.org/tests/view/610733caafad2da4f4ae3da7
Parameters: A total of 256 net biases were tuned (hidden layer 2)
Base net: nn-ec3c8e029926.nnue
New net: nn-46832cfbead3.nnue
STC:
LLR: 2.98 (-2.94,2.94) <-0.50,2.50>
Total: 50520 W: 3953 L: 3765 D: 42802
Ptnml(0-2): 138, 3063, 18678, 3235, 146
https://tests.stockfishchess.org/tests/view/610a79692a8a49ac5be793f4
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 57256 W: 1723 L: 1566 D: 53967
Ptnml(0-2): 12, 1442, 25568, 1589, 17
https://tests.stockfishchess.org/tests/view/610ac5bb2a8a49ac5be79434
Closes https://github.com/official-stockfish/Stockfish/pull/3642
Bench: 5359314
Stefan Geschwentner [Tue, 3 Aug 2021 14:32:48 +0000 (16:32 +0200)]
Simplify the new cmh pruning thresholds by directly using a quadratic formula.
This also decouples the stat bonus updates from the threshold, which creates fewer dependencies for tuning of the stat bonus parameters.
Perhaps a further fine tuning of the now separated coefficients for constHist[0] and constHist[1] could give further gains.
STC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 78384 W: 6134 L: 6090 D: 66160
Ptnml(0-2): 207, 5013, 28705, 5063, 204
https://tests.stockfishchess.org/tests/view/6106d235afad2da4f4ae3d4b
LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 38176 W: 1149 L: 1095 D: 35932
Ptnml(0-2): 6, 1000, 17030, 1038, 14
https://tests.stockfishchess.org/tests/view/6107a080afad2da4f4ae3def
closes https://github.com/official-stockfish/Stockfish/pull/3639
Bench: 5098146
VoyagerOne [Mon, 2 Aug 2021 17:52:48 +0000 (13:52 -0400)]
Futile pruning simplification
Remove CMH conditions in futile pruning.
STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 93520 W: 7165 L: 7138 D: 79217
Ptnml(0-2): 222, 5923, 34427, 5982, 206
https://tests.stockfishchess.org/tests/view/61083104e50a153c346ef8df
LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 59072 W: 1746 L: 1706 D: 55620
Ptnml(0-2): 13, 1562, 26353, 1588, 20
https://tests.stockfishchess.org/tests/view/610894f2e50a153c346ef913
closes https://github.com/official-stockfish/Stockfish/pull/3638
Bench: 5229673
VoyagerOne [Sat, 31 Jul 2021 12:18:49 +0000 (08:18 -0400)]
CMH Pruning Tweak
replace CounterMovePruneThreshold by a depth dependent threshold
STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 35512 W: 2718 L: 2552 D: 30242
Ptnml(0-2): 66, 2138, 13194, 2280, 78
https://tests.stockfishchess.org/tests/view/6104442fafad2da4f4ae3b94
LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36536 W: 1150 L: 1019 D: 34367
Ptnml(0-2): 10, 920, 16278, 1049, 11
https://tests.stockfishchess.org/tests/view/6104b033afad2da4f4ae3bbc
closes https://github.com/official-stockfish/Stockfish/pull/3636
Bench: 5848718
Tomasz Sobczyk [Tue, 27 Jul 2021 20:12:14 +0000 (22:12 +0200)]
Avoid unnecessary stores in the affine transform
This patch improves the codegen in the AffineTransform::forward function for architectures >=SSSE3. Current code works directly on memory and the compiler cannot see that the stores through outptr do not alias the loads through weights and input32. The solution implemented is to perform the affine transform with local variables as accumulators and only store the result to memory at the end. The number of accumulators required is OutputDimensions / OutputSimdWidth, which means that for the 1024->16 affine transform it requires 4 registers with SSSE3, 2 with AVX2, 1 with AVX512. It also cuts the number of stores required by NumRegs * 256 for each node evaluated. The local accumulators are expected to be assigned to registers, but even if this cannot be done in some case due to register pressure it will help the compiler to see that there is no aliasing between the loads and stores and may still result in better codegen.
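The general pattern, stripped of the SIMD details (a simplified scalar sketch, not the actual Stockfish code):
```
// Simplified sketch of the idea: accumulate each output in a local variable,
// so the compiler can keep it in a register without worrying about aliasing
// between the output buffer and the weights/input, and write the result to
// memory only once at the end.
void affineForwardSketch(const int* input, const int* weights, int* output,
                         int numInputs, int numOutputs) {
    for (int o = 0; o < numOutputs; ++o) {
        int acc = 0;                                   // local accumulator
        for (int i = 0; i < numInputs; ++i)
            acc += weights[o * numInputs + i] * input[i];
        output[o] = acc;                               // single store per output
    }
}
```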
See https://godbolt.org/z/59aTKbbYc for codegen comparison.
passed STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140328 W: 10635 L: 10358 D: 119335
Ptnml(0-2): 302, 8339, 52636, 8554, 333
closes https://github.com/official-stockfish/Stockfish/pull/3634
No functional change
SFisGOD [Tue, 27 Jul 2021 16:43:58 +0000 (00:43 +0800)]
Update default net to nn-56a5f1c4173a.nnue
SPSA 1: https://tests.stockfishchess.org/tests/view/60fd24efd8a6b65b2f3a796e
Parameters: A total of 256 net biases were tuned (hidden layer 2)
New best values: Half of the changes from the tuning run
New net: nn-5992d3ba79f3.nnue
SPSA 2: https://tests.stockfishchess.org/tests/view/60fec7d6d8a6b65b2f3a7aa2
Parameters: A total of 128 net biases were tuned (hidden layer 1)
New best values: Half of the changes from the tuning run
New net: nn-56a5f1c4173a.nnue
STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 140392 W: 10863 L: 10578 D: 118951
Ptnml(0-2): 347, 8754, 51718, 9021, 356
https://tests.stockfishchess.org/tests/view/610037e396b86d98abf6a79e
LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 14216 W: 454 L: 355 D: 13407
Ptnml(0-2): 4, 323, 6356, 420, 5
https://tests.stockfishchess.org/tests/view/61019995afad2da4f4ae3a3c
Closes #3633
Bench: 4801359
SFisGOD [Sun, 25 Jul 2021 11:43:25 +0000 (19:43 +0800)]
Update default net to nn-26abeed38351.nnue
SPSA: https://tests.stockfishchess.org/tests/view/60fba335d8a6b65b2f3a7891
New best values: Half of the changes from the tuning run.
Setting: nodestime=300 with 10+0.1 (approximate real TC is 2.5 seconds)
The rest is the same as described in #3593
The change from nodestime=600 to 300 was suggested by gekkehenker to prevent
time losses for some slow workers, see SFisGOD@94cd757#commitcomment-53324840
STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 67448 W: 5241 L: 5036 D: 57171
Ptnml(0-2): 151, 4198, 24827, 4391, 157
https://tests.stockfishchess.org/tests/view/60fd50f2d8a6b65b2f3a798e
LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 48752 W: 1504 L: 1358 D: 45890
Ptnml(0-2): 13, 1226, 21754, 1368, 15
https://tests.stockfishchess.org/tests/view/60fd7bb2d8a6b65b2f3a79a9
Closes https://github.com/official-stockfish/Stockfish/pull/3630
Bench: 5124774
Giacomo Lorenzetti [Sat, 24 Jul 2021 20:03:29 +0000 (22:03 +0200)]
Simplification in LMR
This commit removes the `!captureOrPromotion` condition from ttCapture reduction and from good/bad history reduction (similar to #3619).
passed STC:
https://tests.stockfishchess.org/tests/view/60fc734ad8a6b65b2f3a7922
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 48680 W: 3855 L: 3776 D: 41049
Ptnml(0-2): 118, 3145, 17744, 3206, 127
passed LTC:
https://tests.stockfishchess.org/tests/view/60fce7d5d8a6b65b2f3a794c
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 86528 W: 2471 L: 2450 D: 81607
Ptnml(0-2): 28, 2203, 38777, 2232, 24
closes https://github.com/official-stockfish/Stockfish/pull/3629
Bench: 4951406
MichaelB7 [Sat, 24 Jul 2021 12:42:00 +0000 (08:42 -0400)]
Update the default net to nn-76a8a7ffb820.nnue.
Combined work by Sergio Vieri, Michael Byrne, and Jonathan D (aka SFisGOD), based on top of previous developments, by restarts from good nets.
Sergio generated the net https://tests.stockfishchess.org/api/nn/nn-d8609abe8caf.nnue:
The initial net nn-d8609abe8caf.nnue is trained by generating around 16B of training data from the last master net nn-9e3c6298299a.nnue, then trained, continuing from the master net, with lambda=0.2 and sampling ratio of 1. Starting with LR=2e-3, dropping LR with a factor of 0.5 until it reaches LR=5e-4. in_scaling is set to 361. No other significant changes made to the pytorch trainer.
Training data gen command (generates in chunks of 200k positions):
generate_training_data min_depth 9 max_depth 11 count 200000 random_move_count 10 random_move_max_ply 80 random_multi_pv 12 random_multi_pv_diff 100 random_multi_pv_depth 8 write_min_ply 10 eval_limit 1500 book noob_3moves.epd output_file_name gendata/$(date +"%Y%m%d-%H%M")_${HOSTNAME}.binpack
PyTorch trainer command (Note that this only trains for 20 epochs, repeatedly train until convergence):
python train.py --features "HalfKAv2^" --max_epochs 20 --smart-fen-skipping --random-fen-skipping 500 --batch-size 8192 --default_root_dir $dir --seed $RANDOM --threads 4 --num-workers 32 --gpus $gpuids --track_grad_norm 2 --gradient_clip_val 0.05 --lambda 0.2 --log_every_n_steps 50 $resumeopt $data $val
See https://github.com/sergiovieri/Stockfish/tree/tools_mod/rl for the scripts used to generate data.
Based on that Michael generated nn-76a8a7ffb820.nnue in the following way:
The net being submitted was trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch
python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 30 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --auto_lr_find True --lambda=1.0 --max_epochs=240 --seed %random%%random% --default_root_dir exp/run_109 --resume-from-model ./pt/nn-d8609abe8caf.pt
This run is thus started from Sergio Vieri's net nn-d8609abe8caf.nnue
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together - making one large Wrong_NNUE 2 binpack and one large Training_Data binpack so they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack.
model.py modifications:
loss = torch.pow(torch.abs(p - q), 2.6).mean()
LR = 8.0e-5 calculated as follows: 1.5e-3*(.992^360) - the idea here was to take a highly trained net and just use all.binpack as a finishing micro refinement touch for the last 2 Elo or so. This net was discovered on the 59th epoch.
optimizer = ranger.Ranger(train_params, betas=(.90, 0.999), eps=1.0e-7, gc_loc=False, use_gc=False)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.992)
For this micro optimization, I had set the period to "5" in train.py. This changes the checkpoint output so that every 5th checkpoint file is created
The final touches were to adjust the NNUE scale, as was done by Jonathan in tests running at the same time.
passed LTC
https://tests.stockfishchess.org/tests/view/60fa45aed8a6b65b2f3a77a4
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 53040 W: 1732 L: 1575 D: 49733
Ptnml(0-2): 14, 1432, 23474, 1583, 17
passed STC
https://tests.stockfishchess.org/tests/view/60f9fee2d8a6b65b2f3a7775
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 37928 W: 3178 L: 3001 D: 31749
Ptnml(0-2): 100, 2446, 13695, 2623, 100.
closes https://github.com/official-stockfish/Stockfish/pull/3626
Bench: 5169957
Giacomo Lorenzetti [Sun, 18 Jul 2021 18:14:11 +0000 (20:14 +0200)]
Apply good/bad history reduction also when inCheck
The main idea is that, in some cases, 'in check' situations are not so different from 'not in check' ones.
Trying to use the piece count in order to select only a few 'in check' situations has failed LTC testing.
It could be interesting to apply one of those ideas to other parts of the search function.
passed STC:
https://tests.stockfishchess.org/tests/view/60f1b68dd1189bed71812d40
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 53472 W: 4078 L: 4008 D: 45386
Ptnml(0-2): 127, 3297, 19795, 3413, 104
passed LTC:
https://tests.stockfishchess.org/tests/view/60f291e6d1189bed71812de3
LLR: 2.92 (-2.94,2.94) <-2.50,0.50>
Total: 89712 W: 2651 L: 2632 D: 84429
Ptnml(0-2): 60, 2261, 40188, 2294, 53
closes https://github.com/official-stockfish/Stockfish/pull/3619
Bench: 5185789
pb00067 [Thu, 15 Jul 2021 18:56:21 +0000 (20:56 +0200)]
Simplify lowply-history scoring logic
STC:
https://tests.stockfishchess.org/tests/view/60eee559d1189bed71812b16
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 33976 W: 2523 L: 2431 D: 29022
Ptnml(0-2): 66, 2030, 12730, 2070, 92
LTC:
https://tests.stockfishchess.org/tests/view/60eefa12d1189bed71812b24
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 107240 W: 3053 L: 3046 D: 101141
Ptnml(0-2): 56, 2668, 48154, 2697, 45
closes https://github.com/official-stockfish/Stockfish/pull/3616
Bench: 5199177
Vizvezdenec [Sun, 18 Jul 2021 10:51:14 +0000 (13:51 +0300)]
Prune illegal moves in qsearch earlier
The main idea is that illegal moves influencing search or
qsearch obviously can't be any good. The only reason
the legality checks for search and qsearch were initially done
after they can actually influence some heuristics is that the
legality check is computationally expensive. Eventually in
search it was moved to a place which makes sure that
illegal moves can't influence the search.
This patch shows that the same can be done for qsearch + it
passed STC with elo-gaining bounds + it removes 3 lines of code
because one no longer needs to increment/decrement movecount
on illegal moves.
passed STC with elo-gaining bounds
https://tests.stockfishchess.org/tests/view/60f20aefd1189bed71812da0
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 61512 W: 4688 L: 4492 D: 52332
Ptnml(0-2): 139, 3730, 22848, 3874, 165
The same version functionally, but with the condition moved even earlier,
passed LTC with simplification bounds.
https://tests.stockfishchess.org/tests/view/60f292cad1189bed71812de9
LLR: 2.98 (-2.94,2.94) <-2.50,0.50>
Total: 60944 W: 1724 L: 1685 D: 57535
Ptnml(0-2): 11, 1556, 27298, 1597, 10
closes https://github.com/official-stockfish/Stockfish/pull/3618
Bench: 4709569
Liam Keegan [Wed, 21 Jul 2021 07:33:13 +0000 (09:33 +0200)]
Add macOS and windows to CI
- macOS
- system clang
- gcc
- windows / msys2
- mingw 64-bit gcc
- mingw 32-bit gcc
- minor code fixes to get new CI jobs to pass
- code: suppress unused-parameter warning on 32-bit windows
- Makefile: if arch=any on macos, don't specify arch at all
fixes https://github.com/official-stockfish/Stockfish/issues/2958
closes https://github.com/official-stockfish/Stockfish/pull/3623
No functional change
VoyagerOne [Mon, 12 Jul 2021 18:44:29 +0000 (14:44 -0400)]
Don't save excluded move eval in TT
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17544 W: 1384 L: 1236 D: 14924
Ptnml(0-2): 37, 1031, 6499, 1157, 48
https://tests.stockfishchess.org/tests/view/60ec8d9bd1189bed71812999
LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 26136 W: 823 L: 707 D: 24606
Ptnml(0-2): 6, 643, 11656, 755, 8
https://tests.stockfishchess.org/tests/view/60ecb11ed1189bed718129ba
closes https://github.com/official-stockfish/Stockfish/pull/3614
Bench: 5505251
Vizvezdenec [Sat, 10 Jul 2021 21:09:15 +0000 (00:09 +0300)]
Remove second futility pruning depth limit
This patch removes the lmrDepth limit for futility pruning at parent nodes.
Since the pruning is already capped by a margin that is a function of lmrDepth, there is no need to additionally cap it by lmrDepth.
passed STC
https://tests.stockfishchess.org/tests/view/60e9b5dfd1189bed71812777
LLR: 2.97 (-2.94,2.94) <-2.50,0.50>
Total: 14872 W: 1264 L: 1145 D: 12463
Ptnml(0-2): 37, 942, 5369, 1041, 47
passed LTC
https://tests.stockfishchess.org/tests/view/60e9c635d1189bed71812790
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 40336 W: 1280 L: 1225 D: 37831
Ptnml(0-2): 24, 1057, 17960, 1094, 33
closes https://github.com/official-stockfish/Stockfish/pull/3612
Bench: 5064969
pb00067 [Wed, 7 Jul 2021 12:32:54 +0000 (14:32 +0200)]
SEE: simplify stm variable initialization
Pull #3458 removed the only usage of pos.see_ge() moving pieces that
don't belong to the side to move, so we can simplify this, adding an assert.
closes https://github.com/official-stockfish/Stockfish/pull/3607
No functional change
Vizvezdenec [Tue, 6 Jul 2021 17:44:50 +0000 (20:44 +0300)]
Remove futility pruning depth limit
This patch removes the futility pruning depth limit for child node futility pruning.
In current master it was doubly capped, by depth and by the futility margin, which is itself a function of depth; this didn't make much sense.
passed STC
https://tests.stockfishchess.org/tests/view/60e2418f9ea99d7c2d693e64
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 116168 W: 9100 L: 9097 D: 97971
Ptnml(0-2): 319, 7496, 42476, 7449, 344
passed LTC
https://tests.stockfishchess.org/tests/view/
60e3374f9ea99d7c2d693f20
LLR: 2.96 (-2.94,2.94) <-2.50,0.50>
Total: 43304 W: 1282 L: 1231 D: 40791
Ptnml(0-2): 8, 1126, 19335, 1173, 10
closes https://github.com/official-stockfish/Stockfish/pull/3606
bench
4965493
SFisGOD [Fri, 2 Jul 2021 22:13:13 +0000 (06:13 +0800)]
Update default net to nn-
9e3c6298299a.nnue
Optimization of nn-
956480d8378f.nnue using SPSA
https://tests.stockfishchess.org/tests/view/
60da2bf63beab81350ac9fe7
Same method as described in PR #3593
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 17792 W: 1525 L: 1372 D: 14895
Ptnml(0-2): 28, 1156, 6401, 1257, 54
https://tests.stockfishchess.org/tests/view/
60deffc59ea99d7c2d693c19
LTC:
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 36544 W: 1245 L: 1109 D: 34190
Ptnml(0-2): 12, 988, 16139, 1118, 15
https://tests.stockfishchess.org/tests/view/
60df11339ea99d7c2d693c22
closes https://github.com/official-stockfish/Stockfish/pull/3601
Bench:
4687476
Paul Mulders [Tue, 29 Jun 2021 09:13:54 +0000 (11:13 +0200)]
Allow passing RTLIB=compiler-rt to make
Not all Linux users will have libatomic installed.
When using clang as the system compiler with compiler-rt as the default
runtime library instead of libgcc, atomic builtins may be provided by compiler-rt.
This change allows such users to pass RTLIB=compiler-rt to make sure
the build doesn't error out on the missing (unnecessary) libatomic.
closes https://github.com/official-stockfish/Stockfish/pull/3597
No functional change
candirufish [Thu, 1 Jul 2021 17:51:41 +0000 (19:51 +0200)]
no cut node reduction for killer moves.
stc:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 44344 W: 3474 L: 3294 D: 37576
Ptnml(0-2): 117, 2710, 16338, 2890, 117
https://tests.stockfishchess.org/tests/view/
60d8ea673beab81350ac9eb8
ltc:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 82600 W: 2638 L: 2441 D: 77521
Ptnml(0-2): 38, 2147, 36749, 2312, 54
https://tests.stockfishchess.org/tests/view/
60d9048f3beab81350ac9eed
closes https://github.com/official-stockfish/Stockfish/pull/3600
Bench:
5160239
xoto10 [Wed, 30 Jun 2021 08:22:59 +0000 (09:22 +0100)]
Simplify lazy_skip.
Small speedup by removing operations in lazy_skip.
STC 10+0.1 :
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 55088 W: 4553 L: 4482 D: 46053
Ptnml(0-2): 163, 3546, 20045, 3637, 153
https://tests.stockfishchess.org/tests/view/
60daa2cb3beab81350aca04d
LTC 60+0.6 :
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 46136 W: 1457 L: 1407 D: 43272
Ptnml(0-2): 10, 1282, 20442, 1316, 18
https://tests.stockfishchess.org/tests/view/
60db0e753beab81350aca08e
closes https://github.com/official-stockfish/Stockfish/pull/3599
Bench
5122403
Stéphane Nicolet [Wed, 23 Jun 2021 07:55:42 +0000 (09:55 +0200)]
Simplify format_cp_aligned_dot()
closes https://github.com/official-stockfish/Stockfish/pull/3583
No functional change
Joost VandeVondele [Sat, 3 Jul 2021 07:20:06 +0000 (09:20 +0200)]
Restore development version
No functional change
Joost VandeVondele [Mon, 28 Jun 2021 19:46:04 +0000 (21:46 +0200)]
Stockfish 14
Official release version of Stockfish 14
Bench:
4770936
---
Today, we have the pleasure to announce Stockfish 14.
As usual, downloads will be freely available at https://stockfishchess.org
The engine is now significantly stronger than just a few months ago,
and wins four times more game pairs than it loses against the previous
release version [0]. Stockfish 14 is now at least 400 Elo ahead of
Stockfish 7, a top engine in 2016 [1]. During the last five years,
Stockfish has thus gained about 80 Elo per year.
Stockfish 14 evaluates positions more accurately than Stockfish 13 as
a result of two major steps forward in defining and training the
efficiently updatable neural network (NNUE) that provides the evaluation
for positions.
First, the collaboration with the Leela Chess Zero team - announced
previously [2] - has come to fruition. The LCZero team has provided a
collection of billions of positions evaluated by Leela that we have
combined with billions of positions evaluated by Stockfish to train the
NNUE net that powers Stockfish 14. The fact that we could use and combine
these datasets freely was essential for the progress made and demonstrates
the power of open source and open data [3].
Second, the architecture of the NNUE network was significantly updated:
the new network is not only larger, but more importantly, it deals better
with large material imbalances and can specialize for multiple phases of
the game [4]. A new project, kick-started by Gary Linscott and
Tomasz Sobczyk, led to a GPU-accelerated net trainer written in
pytorch [5]. This tool allows for training high-quality nets in a couple
of hours.
Finally, this release features some search refinements, minor bug
fixes and additional improvements. For example, Stockfish is now about
90 Elo stronger for chess960 (Fischer random chess) at short time control.
The Stockfish project builds on a thriving community of enthusiasts
(thanks everybody!) that contribute their expertise, time, and resources
to build a free and open-source chess engine that is robust, widely
available, and very strong. We invite our chess fans to join the fishtest
testing framework and programmers to contribute to the project on
github [6].
Stay safe and enjoy chess!
The Stockfish team
[0] https://tests.stockfishchess.org/tests/view/
60dae5363beab81350aca077
[1] https://nextchessmove.com/dev-builds
[2] https://stockfishchess.org/blog/2021/stockfish-13/
[3] https://lczero.org/blog/2021/06/the-importance-of-open-data/
[4] https://github.com/official-stockfish/Stockfish/commit/
e8d64af1
[5] https://github.com/glinscott/nnue-pytorch/
[6] https://stockfishchess.org/get-involved/
Brad Knox [Tue, 29 Jun 2021 06:40:16 +0000 (01:40 -0500)]
Update Top CPU Contributors
closes https://github.com/official-stockfish/Stockfish/pull/3595
No functional change
SFisGOD [Mon, 28 Jun 2021 06:58:51 +0000 (14:58 +0800)]
Update default net to nn-
3475407dc199.nnue
Optimization of eight subnetwork output layers of Michael's nn-
190f102a22c3.nnue using SPSA
https://tests.stockfishchess.org/tests/view/
60d5510642a522cc50282ef3
Parameters: A total of 256 net weights and 8 net biases were tuned
New best values: The raw values at the end of the tuning run were used (800k games, 5 seconds TC)
Settings: default ck value and SPSA A is 30,000 (3.75% of the total number of games)
STC:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 29064 W: 2435 L: 2269 D: 24360
Ptnml(0-2): 72, 1857, 10505, 2029, 69
https://tests.stockfishchess.org/tests/view/
60d8ea123beab81350ac9eb6
LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 61848 W: 2055 L: 1884 D: 57909
Ptnml(0-2): 18, 1708, 27310, 1861, 27
https://tests.stockfishchess.org/tests/view/
60d8f0393beab81350ac9ec6
closes https://github.com/official-stockfish/Stockfish/pull/3593
Bench:
4770936
MichaelB7 [Sun, 27 Jun 2021 15:26:09 +0000 (11:26 -0400)]
Make net nn-
956480d8378f.nnue the default
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch
python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_18 --resume-from-model ./pt/nn-
75980ca503c6.pt
This run is thus started from a previous master net.
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack.
passed STC:
https://tests.stockfishchess.org/tests/view/
60d0c0a7a8ec07dc34c072b2
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 18440 W: 1693 L: 1531 D: 15216
Ptnml(0-2): 67, 1225, 6464, 1407, 57
passed LTC:
https://tests.stockfishchess.org/tests/view/
60d762793beab81350ac9d72
LLR: 2.98 (-2.94,2.94) <0.50,3.50>
Total: 93120 W: 3152 L: 2933 D: 87035
Ptnml(0-2): 48, 2581, 41076, 2814, 41
passed LTC (rebased branch to current master):
https://tests.stockfishchess.org/tests/view/
60d85eeb3beab81350ac9e2b
LLR: 2.96 (-2.94,2.94) <0.50,3.50>
Total: 42688 W: 1347 L: 1206 D: 40135
Ptnml(0-2): 14, 1097, 18981, 1238, 14.
closes https://github.com/official-stockfish/Stockfish/pull/3592
Bench:
4906727
Joost VandeVondele [Wed, 23 Jun 2021 05:23:21 +0000 (07:23 +0200)]
Update WDL model for NNUE
This updates the WDL model based on the LTC statistics from June this year (10M games),
so from pre-NNUE to NNUE-based results
(for old results, see https://github.com/official-stockfish/Stockfish/pull/2778).
As before, the fit of the model to the data is quite good.
closes https://github.com/official-stockfish/Stockfish/pull/3582
No functional change
bmc4 [Tue, 22 Jun 2021 22:33:14 +0000 (19:33 -0300)]
Simplify Reductions Initialization
passed
STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 45032 W: 3600 L: 3518 D: 37914
Ptnml(0-2): 111, 2893, 16435, 2957, 120
https://tests.stockfishchess.org/tests/view/
60d2655d40925195e7a6c527
LTC:
LLR: 3.00 (-2.94,2.94) <-2.50,0.50>
Total: 25728 W: 786 L: 722 D: 24220
Ptnml(0-2): 5, 650, 11494, 706, 9
https://tests.stockfishchess.org/tests/view/
60d2b14240925195e7a6c577
closes https://github.com/official-stockfish/Stockfish/pull/3584
bench:
4602977
Stéphane Nicolet [Tue, 22 Jun 2021 07:08:37 +0000 (09:08 +0200)]
Detect fortresses a little bit quicker
In the so-called "hybrid" method of evaluation of current master, we use the
classical eval (because of its speed) instead of the NNUE eval when the classical
material balance approximation hints that the position is "winning enough" to
rely on the classical eval.
This trade-off between speed and accuracy works well in general, but in
some fortress positions the classical eval is just bad. So in shuffling branches
of the search tree, we (slowly) increase the threshold so that eventually we
don't trust classical anymore and switch to NNUE evaluation.
This patch increases that threshold faster, so that we switch to NNUE more quickly
in shuffling branches. The idea is to incite Stockfish to spend less time on
fortress lines in the search tree, and more time searching the critical lines.
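A hedged sketch of the mechanism; all constants are illustrative, only the idea of a faster-growing threshold comes from the patch:
```
// The classical eval is only trusted when the material-balance approximation
// is above a threshold, and that threshold grows with the fifty-move (shuffle)
// counter, so long shuffling lines eventually fall back to NNUE. This patch
// effectively makes the growth steeper, so the switch happens sooner.
int classical_trust_threshold(int rule50Count)
{
    const int base         = 1800;   // illustrative base threshold
    const int growthPerPly = 40;     // illustrative slope; the patch increases it
    return base + growthPerPly * rule50Count;
}
```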
passed STC:
LLR: 2.96 (-2.94,2.94) <-0.50,2.50>
Total: 47872 W: 3908 L: 3720 D: 40244
Ptnml(0-2): 122, 3053, 17419, 3199, 143
https://tests.stockfishchess.org/tests/view/
60cef34b457376eb8bcab79d
passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 73616 W: 2326 L: 2143 D: 69147
Ptnml(0-2): 21, 1940, 32705, 2119, 23
https://tests.stockfishchess.org/tests/view/
60cf6d842114332881e73528
Retested at LTC against latest master:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 18264 W: 642 L: 532 D: 17090
Ptnml(0-2): 6, 479, 8055, 583, 9
https://tests.stockfishchess.org/tests/view/
60d18cd540925195e7a6c351
closes https://github.com/official-stockfish/Stockfish/pull/3578
Bench:
5139233
MichaelB7 [Mon, 21 Jun 2021 12:10:35 +0000 (08:10 -0400)]
Make net nn-
190f102a22c3.nnue the default net.
Trained with the pytorch trainer: https://github.com/glinscott/nnue-pytorch
python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_17 --resume-from-model ./pt/nn-
75980ca503c6.pt
This run is thus started from the previous master net.
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack.
passed LTC
https://tests.stockfishchess.org/tests/view/
60d09f52b4c17000d679517f
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 32184 W: 1100 L: 970 D: 30114
Ptnml(0-2): 10, 878, 14193, 994, 17
passed STC
https://tests.stockfishchess.org/tests/view/
60d086c02114332881e7368e
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 11360 W: 1056 L: 906 D: 9398
Ptnml(0-2): 25, 735, 4026, 853, 41
closes https://github.com/official-stockfish/Stockfish/pull/3576
Bench:
4631244
Joost VandeVondele [Mon, 21 Jun 2021 06:23:50 +0000 (08:23 +0200)]
Fix build error on OSX
directly use integer version for cp calculation.
fixes https://github.com/official-stockfish/Stockfish/issues/3573
closes https://github.com/official-stockfish/Stockfish/pull/3574
No functional change
Stéphane Nicolet [Wed, 16 Jun 2021 05:23:26 +0000 (07:23 +0200)]
Remove the Contempt UCI option
This patch removes the UCI option for setting Contempt in classical evaluation.
It is exactly equivalent to using Contempt=0 for the UCI contempt value and keeping
the dynamic part in the algo (renaming this dynamic part `trend` to better describe
what it does). We have tried quite hard to implement a working Contempt feature for
NNUE but nothing really worked, so it is probably time to give up.
Interested chess fans wishing to keep playing with the UCI option for Contempt and
use it with the classical eval are urged to download the version tagged "SF_Classical"
of Stockfish (dated 31 July 2020), as it was the last version where our search
algorithm was tuned for the classical eval and is probably our strongest classical
player ever: https://github.com/official-stockfish/Stockfish/tags
Passed STC:
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 72904 W: 6228 L: 6175 D: 60501
Ptnml(0-2): 221, 5006, 25971, 5007, 247
https://tests.stockfishchess.org/tests/view/
60c98bf9457376eb8bcab18d
Passed LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 45168 W: 1601 L: 1547 D: 42020
Ptnml(0-2): 38, 1331, 19786, 1397, 32
https://tests.stockfishchess.org/tests/view/
60c9c7fa457376eb8bcab1bb
closes https://github.com/official-stockfish/Stockfish/pull/3575
Bench:
4947716
Stéphane Nicolet [Sun, 20 Jun 2021 08:29:20 +0000 (10:29 +0200)]
Keep more pawns and pieces when attacking
This patch increases the weight of pawns and pieces from 28 to 32
in the scaling formula we apply to the output of the pure NNUE eval.
Increasing this gradient for pawns and pieces means that Stockfish
will try a little harder to keep material when she has the advantage,
and try a little bit harder to escape into an endgame when she is
under pressure.
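A hedged illustration of the scaling idea; every constant except the 28 -> 32 gradient is made up:
```
// The pure NNUE output is scaled by a factor that grows with the pawns and
// pieces left on the board. Raising the gradient from 28 to 32 makes the
// engine value keeping material a bit more when ahead, and trading down
// into an endgame a bit more when under pressure.
int scaled_nnue_eval(int nnueOutput, int pawnAndPieceCount)
{
    const int gradient = 32;                   // was 28 before this patch
    const int base     = 700;                  // illustrative base scale
    const int scale    = base + gradient * pawnAndPieceCount;
    return nnueOutput * scale / 1024;          // illustrative normalization
}
```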
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 53168 W: 4371 L: 4177 D: 44620
Ptnml(0-2): 160, 3389, 19283, 3601, 151
https://tests.stockfishchess.org/tests/view/
60cefd1d457376eb8bcab7ab
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 10888 W: 386 L: 288 D: 10214
Ptnml(0-2): 3, 260, 4821, 356, 4
https://tests.stockfishchess.org/tests/view/
60cf709d2114332881e7352b
closes https://github.com/official-stockfish/Stockfish/pull/3571
Bench:
4965430
MichaelB7 [Sat, 19 Jun 2021 13:57:09 +0000 (09:57 -0400)]
Make net nn-
75980ca503c6.nnue the default.
trained with the Python command
c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_10 --resume-from-model ./pt/nn-
3b20abec10c1.pt
`
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack so that they were approximately equal in size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack.
Net nn-
3b20abec10c1.nnue was chosen as the --resume-from-model with the idea that the manually hex-edited values would be learned by the training and would not need to be manually adjusted going forward. They would also be fine-tuned by the learning process.
passed STC:
https://tests.stockfishchess.org/tests/view/
60cdf91e457376eb8bcab66f
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18256 W: 1639 L: 1479 D: 15138
Ptnml(0-2): 59, 1179, 6505, 1313, 72
passed LTC:
https://tests.stockfishchess.org/tests/view/
60ce2166457376eb8bcab6e1
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 18792 W: 654 L: 542 D: 17596
Ptnml(0-2): 9, 490, 8291, 592, 14
closes https://github.com/official-stockfish/Stockfish/pull/3570
Bench:
5020972
Tomasz Sobczyk [Thu, 17 Jun 2021 10:36:06 +0000 (12:36 +0200)]
Change trace with NNUE eval support
This patch adds some more output to the `eval` command. It adds a board display
with estimated piece values (the method is remove-piece, evaluate, put-piece), and
splits the NNUE evaluation into (psqt, layers) contributions for each bucket of the NNUE net.
Example:
```
./stockfish
position fen 3Qb1k1/1r2ppb1/pN1n2q1/Pp1Pp1Pr/4P2p/4BP2/4B1R1/1R5K b - - 11 40
eval
Contributing terms for the classical eval:
+------------+-------------+-------------+-------------+
| Term | White | Black | Total |
| | MG EG | MG EG | MG EG |
+------------+-------------+-------------+-------------+
| Material | ---- ---- | ---- ---- | -0.73 -1.55 |
| Imbalance | ---- ---- | ---- ---- | -0.21 -0.17 |
| Pawns | 0.35 -0.00 | 0.19 -0.26 | 0.16 0.25 |
| Knights | 0.04 -0.08 | 0.12 -0.01 | -0.08 -0.07 |
| Bishops | -0.34 -0.87 | -0.17 -0.61 | -0.17 -0.26 |
| Rooks | 0.12 0.00 | 0.08 0.00 | 0.04 0.00 |
| Queens | 0.00 0.00 | -0.27 -0.07 | 0.27 0.07 |
| Mobility | 0.84 1.76 | 0.01 0.66 | 0.83 1.10 |
|King safety | -0.99 -0.17 | -0.72 -0.10 | -0.27 -0.07 |
| Threats | 0.27 0.27 | 0.73 0.86 | -0.46 -0.59 |
| Passed | 0.00 0.00 | 0.79 0.82 | -0.79 -0.82 |
| Space | 0.61 0.00 | 0.24 0.00 | 0.37 0.00 |
| Winnable | ---- ---- | ---- ---- | 0.00 -0.03 |
+------------+-------------+-------------+-------------+
| Total | ---- ---- | ---- ---- | -1.03 -2.14 |
+------------+-------------+-------------+-------------+
NNUE derived piece values:
+-------+-------+-------+-------+-------+-------+-------+-------+
| | | | Q | b | | k | |
| | | | +12.4 | -1.62 | | | |
+-------+-------+-------+-------+-------+-------+-------+-------+
| | r | | | p | p | b | |
| | -3.89 | | | -0.84 | -1.19 | -3.32 | |
+-------+-------+-------+-------+-------+-------+-------+-------+
| p | N | | n | | | q | |
| -1.81 | +3.71 | | -4.82 | | | -5.04 | |
+-------+-------+-------+-------+-------+-------+-------+-------+
| P | p | | P | p | | P | r |
| +1.16 | -0.91 | | +0.55 | +0.12 | | +0.50 | -4.02 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| | | | | P | | | p |
| | | | | +2.33 | | | +1.17 |
+-------+-------+-------+-------+-------+-------+-------+-------+
| | | | | B | P | | |
| | | | | +4.79 | +1.54 | | |
+-------+-------+-------+-------+-------+-------+-------+-------+
| | | | | B | | R | |
| | | | | +4.54 | | +6.03 | |
+-------+-------+-------+-------+-------+-------+-------+-------+
| | R | | | | | | K |
| | +4.81 | | | | | | |
+-------+-------+-------+-------+-------+-------+-------+-------+
NNUE network contributions (Black to move)
+------------+------------+------------+------------+
| Bucket | Material | Positional | Total |
| | (PSQT) | (Layers) | |
+------------+------------+------------+------------+
| 0 | + 0.32 | - 1.46 | - 1.13 |
| 1 | + 0.25 | - 0.68 | - 0.43 |
| 2 | + 0.46 | - 1.72 | - 1.25 |
| 3 | + 0.55 | - 1.80 | - 1.25 |
| 4 | + 0.48 | - 1.77 | - 1.29 |
| 5 | + 0.40 | - 2.00 | - 1.60 |
| 6 | + 0.57 | - 2.12 | - 1.54 | <-- this bucket is used
| 7 | + 3.38 | - 2.00 | + 1.37 |
+------------+------------+------------+------------+
Classical evaluation -1.00 (white side)
NNUE evaluation +1.54 (white side)
Final evaluation +2.38 (white side) [with scaled NNUE, hybrid, ...]
```
Also renames the export_net() function to save_eval() while there.
closes https://github.com/official-stockfish/Stockfish/pull/3562
No functional change
proukornew [Fri, 18 Jun 2021 21:52:46 +0000 (00:52 +0300)]
Fix for Cygwin's environment build-profile (fixed)
The Cygwin environment has two g++ compilers, each with a different problem
for compiling Stockfish at the moment:
(a) g++.exe : full posix build compiler, linked to cygwin dll.
=> This one has a problem embedding the net.
(b) x86_64-w64-mingw32-g++.exe : native Windows build compiler.
=> This one manages to embed the net, but has a problem related to libgcov
when we use the profile-build target of Stockfish.
This patch solves the problem for compiler (b), so that our recommended command line
if you want to build an optimized version of Stockfish on Cygwin becomes something
like the following (you can change the ARCH value to whatever you want, but note
the COMP and CXX variables pointing at the right compiler):
```
make -j profile-build ARCH=x86-64-modern COMP=mingw CXX=x86_64-w64-mingw32-c++.exe
```
closes https://github.com/official-stockfish/Stockfish/pull/3569
No functional change
Joost VandeVondele [Fri, 18 Jun 2021 21:50:01 +0000 (23:50 +0200)]
Make net nn-
50144f835024.nnue the default
trained with the Python command
c:\nnue>python train.py i:/bin/all.binpack i:/bin/all.binpack --gpus 1 --threads 4 --num-workers 30 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 3 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed %random%%random% --default_root_dir exp/run_8 --resume-from-model ./pt/nn-
6ad41a9207d0.pt
`
all.binpack equaled 4 parts Wrong_NNUE_2.binpack https://drive.google.com/file/d/1seGNOqcVdvK_vPNq98j-zV3XPE5zWAeq/view?usp=sharing plus two parts of Training_Data.binpack https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
Each set was concatenated together, making one large Wrong_NNUE_2 binpack and one large Training_Data binpack of approximately equal size. They were then interleaved together. The idea was to give Wrong_NNUE.binpack closer to equal weighting with the Training_Data binpack.
nn-
6ad41a9207d0.pt was derived from a net vondele ran which passed STC quickly,
but faltered in LTC. https://tests.stockfishchess.org/tests/view/
60cba666457376eb8bcab443
STC:
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 18792 W: 2068 L: 1889 D: 14835
Ptnml(0-2): 82, 1480, 6117, 1611, 106
https://tests.stockfishchess.org/tests/view/
60ccda8b457376eb8bcab568
LTC:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 11376 W: 574 L: 454 D: 10348
Ptnml(0-2): 4, 412, 4747, 510, 15
https://tests.stockfishchess.org/tests/view/
60ccf952457376eb8bcab58d
closes https://github.com/official-stockfish/Stockfish/pull/3568
Bench:
4900906
Tomasz Sobczyk [Fri, 18 Jun 2021 10:03:03 +0000 (12:03 +0200)]
Add basic github workflow
move to github actions to replace travis CI.
First version, testing on linux using gcc and clang.
gcc build with sanitizers and valgrind.
No functional change
SFisGOD [Fri, 18 Jun 2021 19:09:20 +0000 (03:09 +0800)]
Update default net to nn-
aa9d7eeb397e.nnue
Optimization of vondele's nn-
33c9d39e5eb6.nnue using SPSA
https://tests.stockfishchess.org/tests/view/
60ca68be457376eb8bcab28b
Setting: ck values are default based on how large the parameters are
The new values for this net are the raw values at the end of the tuning (80k games)
The significant changes are in buckets 1 and 2 (5-12 pieces), so compared to nn-33c9 the main difference is in endgame play. There is also a change in bucket 7 (29-32 pieces), but it is not as substantial as the changes in buckets 1 and 2. If we interpret the changes based on an experiment from a few months ago, this new net plays more optimistically during endgames and less optimistically during openings.
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 49504 W: 4246 L: 4053 D: 41205
Ptnml(0-2): 140, 3282, 17749, 3407, 174
https://tests.stockfishchess.org/tests/view/
60cbd752457376eb8bcab478
LTC:
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 88720 W: 4926 L: 4651 D: 79143
Ptnml(0-2): 105, 4048, 35793, 4295, 119
https://tests.stockfishchess.org/tests/view/
60cc7828457376eb8bcab4fa
closes https://github.com/official-stockfish/Stockfish/pull/3566
Bench:
4758885
ap [Thu, 17 Jun 2021 23:43:58 +0000 (01:43 +0200)]
New default net nn-
3b20abec10c1.nnue
This net was created by @pleomati, who used a hex editor to manually edit
10 randomly chosen values in the LCSFNet10 net (nn-
6ad41a9207d0.nnue) to
create this one. The LCSFNet10 net was trained by Joost VandeVondele from
a dataset combining Stockfish games and Leela games (16x10^9 positions from
SF self-play at depth 9, and 6.3x10^9 positions from Leela games, so overall
72% of Stockfish positions and 28% of Leela positions).
passed STC 10+0.1:
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 50888 W: 5881 L: 5654 D: 39353
Ptnml(0-2): 281, 4290, 16085, 4497, 291
https://tests.stockfishchess.org/tests/view/
60cbfa68457376eb8bcab49a
passed LTC 60+0.6:
LLR: 2.94 (-2.94,2.94) <0.50,3.50>
Total: 25480 W: 1498 L: 1338 D: 22644
Ptnml(0-2): 36, 1155, 10193, 1325, 31
https://tests.stockfishchess.org/tests/view/
60cc4af8457376eb8bcab4d4
closes https://github.com/official-stockfish/Stockfish/pull/3564
Bench:
4904930
Stéphane Nicolet [Thu, 17 Jun 2021 16:09:42 +0000 (18:09 +0200)]
Revert "Fix for Cygwin's environment build-profile"
This reverts commit "Fix for Cygwin's environment build-profile", as it was
giving errors for "make clean" on some Windows environments. See comments in
https://github.com/official-stockfish/Stockfish/commit/
68bf362ea2385a641be9f5ed9ce2acdf55a1ecf1
Possibly somebody can propose a solution that would fix Cygwin builds and
not break on other systems too, stay tuned! :-)
No functional change
bmc4 [Tue, 15 Jun 2021 23:56:09 +0000 (20:56 -0300)]
Simplify reduction when best move doesn't change frequently.
STC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 40400 W: 3468 L: 3377 D: 33555
Ptnml(0-2): 134, 2734, 14388, 2795, 149
https://tests.stockfishchess.org/tests/view/
60c93e5a457376eb8bcab15f
LTC:
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 34200 W: 1190 L: 1128 D: 31882
Ptnml(0-2): 22, 998, 15001, 1054, 25
https://tests.stockfishchess.org/tests/view/
60c96a1a457376eb8bcab180
closes https://github.com/official-stockfish/Stockfish/pull/3559
bench:
5629669
proukornew [Thu, 13 May 2021 21:49:28 +0000 (00:49 +0300)]
Fix for Cygwin's environment build-profile
The Cygwin environment has two g++ compilers, each with a different problem
for compiling Stockfish at the moment:
(a) g++.exe : full posix build compiler, linked to cygwin dll.
=> This one has a problem embedding the net.
(b) x86_64-w64-mingw32-g++.exe : native Windows build compiler.
=> This one manages to embed the net, but has a problem related to libgcov
when we use the profile-build target of Stockfish.
This patch solves the problem for compiler (b), so that our recommended command line
if you want to build an optimized version of Stockfish on Cygwin becomes something
like the following (you can change the ARCH value to whatever you want, but note
the COMP and CXX variables pointing at the right compiler):
```
make -j profile-build ARCH=x86-64-modern COMP=mingw CXX=x86_64-w64-mingw32-c++.exe
```
closes https://github.com/official-stockfish/Stockfish/pull/3463
No functional change
Joost VandeVondele [Tue, 15 Jun 2021 10:49:23 +0000 (12:49 +0200)]
New default net nn-
33c9d39e5eb6.nnue
Like the previous net, this net is trained on Leela games provided by borg.
See also https://lczero.org/blog/2021/06/the-importance-of-open-data/
The particular data set, which is a mix of T60 and T74 data, is now available as a single binpack:
https://drive.google.com/file/d/1RFkQES3DpsiJqsOtUshENtzPfFgUmEff/view?usp=sharing
The training command was:
python train.py ../../training_data_pylon.binpack ../../training_data_pylon.binpack --gpus 1 --threads 2 --num-workers 2 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --random-fen-skipping 10 --features=HalfKAv2^ --lambda=1.0 --max_epochs=440 --seed $RANDOM --default_root_dir exp/run_2
passed STC:
https://tests.stockfishchess.org/tests/view/
60c887cb457376eb8bcab054
LLR: 2.94 (-2.94,2.94) <-0.50,2.50>
Total: 12792 W: 1483 L: 1311 D: 9998
Ptnml(0-2): 62, 989, 4131, 1143, 71
passed LTC:
https://tests.stockfishchess.org/tests/view/
60c8e5c4457376eb8bcab0f0
LLR: 2.95 (-2.94,2.94) <0.50,3.50>
Total: 11272 W: 601 L: 477 D: 10194
Ptnml(0-2): 9, 421, 4657, 535, 14
also had strong LTC performance against another strong net of the series:
https://tests.stockfishchess.org/tests/view/
60c8c40d457376eb8bcab0c6
closes https://github.com/official-stockfish/Stockfish/pull/3557
Bench:
5032320
J. Oster [Mon, 14 Jun 2021 15:28:30 +0000 (17:28 +0200)]
Fix a rare case of wrong TB ranking
of a root move leading to a 3-fold repetition.
With this small fix a draw ranking, and thus a draw score, is applied.
This works both for ranking by DTZ and by WDL tables.
Fixes https://github.com/official-stockfish/Stockfish/issues/3542
(No functional change without TBs.)
Bench:
4877339
Tomasz Sobczyk [Sat, 12 Jun 2021 18:45:14 +0000 (20:45 +0200)]
Reduce the number of accumulator states
Reduce from 3 to 2. Make the intent of the states clearer.
STC: https://tests.stockfishchess.org/tests/view/
60c50111457376eb8bcaad03
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 61888 W: 5007 L: 4944 D: 51937
Ptnml(0-2): 164, 3947, 22649, 4030, 154
LTC: https://tests.stockfishchess.org/tests/view/
60c52b1c457376eb8bcaad2c
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 20248 W: 688 L: 618 D: 18942
Ptnml(0-2): 7, 551, 8946, 605, 15
closes https://github.com/official-stockfish/Stockfish/pull/3548
No functional change.
JWmer [Sun, 13 Jun 2021 21:48:32 +0000 (23:48 +0200)]
Update default net to nn-
8e47cf062333.nnue
This net is the result of training on data used by the Leela project. More precisely,
we shuffled T60 and T74 data kindly provided by borg (for different Tnn, the data is
a result of Leela selfplay with differently sized Leela nets).
The data is available at vondele's google drive:
https://drive.google.com/drive/folders/1mftuzYdl9o6tBaceR3d_VBQIrgKJsFpl.
The Leela data comes in small chunks of .binpack files. To shuffle them, we simply
used a small python script to randomly rename the files, and then concatenated them
using `cat`. As validation data we picked a file of T60 data. We will further investigate
T74 data.
The training for the NNUE architecture used 200 epochs with the Python trainer from
the Stockfish project. Unlike the previous run we tried with this data, this run does
not have adjusted scaling (not because we didn't want to, but because we forgot).
However, this training randomly skips 40% more positions than the previous run. The loss
was very spiky and decreased more slowly than usual.
Training loss: https://github.com/official-stockfish/images/blob/main/training-loss-
8e47cf062333.png
Validation loss: https://github.com/official-stockfish/images/blob/main/validation-loss-
8e47cf062333.png
This is the exact training command:
python train.py --smart-fen-skipping --random-fen-skipping 14 --batch-size 16384 --threads 4 --num-workers 4 --gpus 1 trainingdata\training_data.binpack validationdata\val.binpack
---
10k STC result:
ELO: 3.61 +-3.3 (95%) LOS: 98.4%
Total: 10000 W: 1241 L: 1137 D: 7622
Ptnml(0-2): 68, 841, 3086, 929, 76
https://tests.stockfishchess.org/tests/view/
60c67e50457376eb8bcaae70
10k LTC result:
ELO: 2.71 +-2.4 (95%) LOS: 98.8%
Total: 10000 W: 659 L: 581 D: 8760
Ptnml(0-2): 22, 485, 3900, 579, 14
https://tests.stockfishchess.org/tests/view/
60c69deb457376eb8bcaae98
Passed LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 9648 W: 685 L: 545 D: 8418
Ptnml(0-2): 22, 448, 3740, 596, 18
https://tests.stockfishchess.org/tests/view/
60c6d41c457376eb8bcaaecf
---
closes https://github.com/official-stockfish/Stockfish/pull/3550
Bench:
4877339
Tomasz Sobczyk [Thu, 10 Jun 2021 15:43:42 +0000 (17:43 +0200)]
Register count for feature transformer
Compute optimal register count for feature transformer accumulation dynamically.
This also introduces a change where AVX512 would only use 8 registers instead of 16
(now possible due to a 2x increase in feature transformer size).
closes https://github.com/official-stockfish/Stockfish/pull/3543
No functional change
Vizvezdenec [Sat, 29 May 2021 03:39:14 +0000 (06:39 +0300)]
Do less LMR extensions
This patch restricts LMR extensions (of non-transposition table moves) from being
used when the transposition table move was extended by two plies via singular
extension. This may serve to limit search explosions in certain positions.
This makes a lot of sense because the precondition for the tt-move to have been
singularly extended by two plies is that the alternate search (with the tt-move
excluded) has failed low hard: it is natural to then search the non-tt moves less
in this situation.
The current state of depth/extensions/reductions management is getting quite tricky
in our search algo, see https://github.com/official-stockfish/Stockfish/pull/3546#issuecomment-
860174549
for some discussion. Suggestions welcome!
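A schematic helper capturing the restriction (names are illustrative, not the actual code):
```
// The LMR extension of a non-tt move is only granted when the tt-move was
// NOT already extended by two plies via singular extension at this node.
bool allow_lmr_extension_for_non_tt_move(bool ttMoveDoublyExtended,
                                         bool lmrSuggestsExtension)
{
    // A double singular extension of the tt-move means the excluded-move
    // search failed low hard, so deepening the other moves rarely pays off.
    return lmrSuggestsExtension && !ttMoveDoublyExtended;
}
```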
Passed STC
https://tests.stockfishchess.org/tests/view/
60c3f293457376eb8bcaac8d
LLR: 2.95 (-2.94,2.94) <-0.50,2.50>
Total: 117984 W: 9698 L: 9430 D: 98856
Ptnml(0-2): 315, 7708, 42703, 7926, 340
passed LTC
https://tests.stockfishchess.org/tests/view/
60c46ea5457376eb8bcaacc7
LLR: 2.97 (-2.94,2.94) <0.50,3.50>
Total: 11280 W: 401 L: 302 D: 10577
Ptnml(0-2): 2, 271, 4998, 364, 5
closes https://github.com/official-stockfish/Stockfish/pull/3546
Bench:
4709974
Stéphane Nicolet [Sun, 13 Jun 2021 07:59:34 +0000 (09:59 +0200)]
Clarify use of UCI options
Update README.md to clarify use of UCI options
closes https://github.com/official-stockfish/Stockfish/pull/3540
No functional change
Tomasz Sobczyk [Wed, 9 Jun 2021 09:21:55 +0000 (11:21 +0200)]
Read NNUE net faster
Load feature transformer weights in bulk on little-endian machines.
This is particularly useful for testing new nets with c-chess-cli,
see https://github.com/lucasart/c-chess-cli/issues/44
```
$ time ./stockfish.exe uci
Before : 0m0.914s
After : 0m0.483s
```
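A minimal sketch of the bulk-read idea, assuming the on-disk weight layout already matches the little-endian host layout (function name and container are illustrative):
```
#include <cstdint>
#include <istream>
#include <vector>

// On a little-endian host the on-disk and in-memory layouts of the int16
// weights match, so a single read() replaces a per-element read-and-convert loop.
bool read_weights_in_bulk(std::istream& stream, std::vector<std::int16_t>& weights)
{
    stream.read(reinterpret_cast<char*>(weights.data()),
                static_cast<std::streamsize>(weights.size() * sizeof(std::int16_t)));
    return !stream.fail();
}
```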
No functional change
Joost VandeVondele [Wed, 9 Jun 2021 21:23:13 +0000 (23:23 +0200)]
Limit double extensions
Double extensions can lead to search explosions in specific positions.
Currently, however, these double extensions are worth about 10 Elo and cannot
be removed. This patch instead limits the number of double extensions given
to a maximum of 3.
This fixes https://github.com/official-stockfish/Stockfish/issues/3532
where the following testcase was shown to be problematic:
```
uci
setoption name Hash value 4
setoption name Contempt value 0
ucinewgame
position fen 8/Pk6/8/1p6/8/P1K5/8/6B1 w - - 37 130
go depth 20
```
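A schematic sketch of the cap described above (names are illustrative; only the limit of 3 comes from the commit message):
```
// Each double extension along a search path bumps a counter inherited from
// the parent node; once the cap of 3 is reached, further double extensions
// are downgraded to ordinary single extensions.
constexpr int MaxDoubleExtensions = 3;   // cap taken from the commit message

int apply_extension(int proposedExtension, int& doubleExtensionCount)
{
    if (proposedExtension == 2)                          // would be a double extension
    {
        if (doubleExtensionCount >= MaxDoubleExtensions)
            return 1;                                    // fall back to a single extension
        ++doubleExtensionCount;
    }
    return proposedExtension;
}
```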
passed STC:
https://tests.stockfishchess.org/tests/view/
60c13161457376eb8bcaaa0f
LLR: 2.95 (-2.94,2.94) <-2.50,0.50>
Total: 73256 W: 6114 L: 6062 D: 61080
Ptnml(0-2): 222, 4912, 26306, 4968, 220
passed LTC:
https://tests.stockfishchess.org/tests/view/
60c196fb457376eb8bcaaa6b
LLR: 2.94 (-2.94,2.94) <-2.50,0.50>
Total: 166440 W: 5559 L: 5594 D: 155287
Ptnml(0-2): 106, 4921, 73197, 4894, 102
closes https://github.com/official-stockfish/Stockfish/pull/3544
Bench:
5067605
bmc4 [Mon, 7 Jun 2021 18:47:37 +0000 (15:47 -0300)]
Simplify promotion move generator
This patch removes Knight promotion checks from Captures. As a consequence,
it also removes this underpromotion from qsearch.
STC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 37776 W: 3113 L: 3023 D: 31640
Ptnml(0-2): 103, 2419, 13755, 2507, 104
https://tests.stockfishchess.org/tests/view/
60be6a06457376eb8bcaa775
LTC:
LLR: 2.93 (-2.94,2.94) <-2.50,0.50>
Total: 39760 W: 1257 L: 1203 D: 37300
Ptnml(0-2): 11, 1079, 17646, 1133, 11
https://tests.stockfishchess.org/tests/view/
60beb972457376eb8bcaa7c5
closes https://github.com/official-stockfish/Stockfish/pull/3536
Bench:
5530620
bmc4 [Sun, 6 Jun 2021 16:31:57 +0000 (13:31 -0300)]
Reduce LMR reduction on PvNode
Reduce the reduction in LMR by 1 on PvNode.
STC:
LLR: 2.93 (-2.94,2.94) <-0.50,2.50>
Total: 266080 W: 22438 L: 21996 D: 221646
Ptnml(0-2): 774, 17874, 95376, 18168, 848
https://tests.stockfishchess.org/tests/view/
60bc0661457376eb8bcaa4bb
LTC:
LLR: 2.93 (-2.94,2.94) <0.50,3.50>
Total: 20144 W: 698 L: 587 D: 18859
Ptnml(0-2): 2, 529, 8906, 626, 9
https://tests.stockfishchess.org/tests/view/
60bcc3f2457376eb8bcaa58d
closes https://github.com/official-stockfish/Stockfish/pull/3534
bench:
5173012
Guy Vreuls [Thu, 3 Jun 2021 14:46:05 +0000 (16:46 +0200)]
Makefile: Extend sanitize support
Enable compiling with multiple sanitizers at once.
Syntax:
make build ARCH=x86-64-avx512 debug=on sanitize="address undefined"
closes https://github.com/official-stockfish/Stockfish/pull/3524
No functional change.
Joost VandeVondele [Thu, 3 Jun 2021 17:18:24 +0000 (19:18 +0200)]
Enhance CI to error on leaks
Add flags to valgrind in our Continuous Integration scripts,
to error on memory leaks.
closes https://github.com/official-stockfish/Stockfish/pull/3525
No functional change.