and should have nearly no influence at STC, as depth 27 is rarely reached.
It was noticed that initializing the threshold with MAX_PLY had an adverse effect,
possibly because the first move is sensitive to this.
Tomasz Sobczyk [Fri, 13 May 2022 15:26:50 +0000 (17:26 +0200)]
Update NNUE architecture to SFNNv5. Update network to nn-3c0aa92af1da.nnue.
Architecture changes:
Duplicated activation after the 1024->15 layer with squared CReLU (so 15->15*2), as proposed by vondele.
Trainer changes:
Added bias to L1 factorization, which was previously missing (no measurable improvement but at least neutral in principle)
For retraining, linearly reduce the lambda parameter from 1.0 at epoch 0 to 0.75 at epoch 800.
Reduce max_skipping_rate from 15 to 10 (compared to vondele's outstanding PR).
Note: This network was trained with a ~0.8% error in quantization regarding the newly added activation function.
This will be fixed in the released trainer version. Expect a trainer PR tomorrow.
Note: The inference implementation cuts a corner to merge results from two activation functions.
This could possibly be resolved more cleanly in the future. An AVX2 implementation is likely not necessary, but a NEON one is missing.
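As an illustration of the duplicated activation, here is a float-domain sketch (the engine's inference code is quantized and SIMD-vectorized; all names here are hypothetical): each of the 15 outputs of the 1024->15 layer feeds the next layer twice, once through a clipped ReLU and once through its square.

```
#include <algorithm>
#include <array>

constexpr int kOut = 15;

// Clipped ReLU in float terms; the quantized code clips integer ranges instead.
inline float clipped_relu(float x) { return std::clamp(x, 0.0f, 1.0f); }

// Duplicate the activation: 15 pre-activations become 15*2 activated values.
std::array<float, 2 * kOut> duplicated_activation(const std::array<float, kOut>& in) {
    std::array<float, 2 * kOut> out{};
    for (int i = 0; i < kOut; ++i) {
        const float c = clipped_relu(in[i]);
        out[i]        = c;      // plain clipped ReLU
        out[kOut + i] = c * c;  // squared clipped ReLU
    }
    return out;
}
```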
Train a net using training data with a heavier weight on positions having 16 pieces
on the board. More specifically, use a relative weight of `i * (32-i)/(16 * 16)+1`
(where i is the number of pieces on the board).
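For concreteness, a small self-contained sketch (the helper name is hypothetical) that evaluates the weighting formula at a few piece counts; the weight peaks at 2.0 for 16 pieces and tapers toward 1.0 at the extremes:

```
#include <cstdio>

// Relative sampling weight as a function of the piece count i.
double relative_weight(int i) { return i * (32 - i) / (16.0 * 16.0) + 1.0; }

int main() {
    const int counts[] = {2, 8, 16, 24, 32};
    for (int i : counts)
        std::printf("pieces=%2d  weight=%.3f\n", i, relative_weight(i));
}
```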
This is done with the trainer branch https://github.com/glinscott/nnue-pytorch/pull/173
The command used is:
```
python train.py $datafile $datafile $restarttype $restartfile --gpus 1 --threads 4 --num-workers 12 --random-fen-skipping=3 --batch-size 16384 --progress_bar_refresh_rate 300 --smart-fen-skipping --features=HalfKAv2_hm^ --lambda=1.00 --max_epochs=$epochs --seed $RANDOM --default_root_dir exp/run_$i
```
The datafile is T60T70wIsRightFarseerT60T74T75T76.binpack, the restart is from the master net.
A new major release of Stockfish is now available at https://stockfishchess.org
Stockfish 15 continues to push the boundaries of chess, providing unrivalled
analysis and playing strength. In our testing, Stockfish 15 is ahead of
Stockfish 14 by 36 Elo points and wins nine times more game pairs than it
loses[1].
Improvements to the engine have made it possible for Stockfish to end up
victorious in tournaments at all sorts of time controls ranging from bullet to
classical and even at Fischer random chess[2]. At CCC, Stockfish won all of
the latest tournaments: CCC 16 Bullet, Blitz and Rapid, CCC 960 championship,
and the CCC 17 Rapid. At TCEC, Stockfish won the Season 21, Cup 9, FRC 4 and
in the current Season 22 superfinal, at the time of writing, has won 16 game
pairs and not yet lost a single one.
This progress is the result of a dedicated team of developers that comes up
with new ideas and improvements. For Stockfish 15, we tested nearly 13000
different changes and retained the best 200. These include the fourth
generation of our NNUE network architecture, as well as various search
improvements. To perform these tests, contributors provide CPU time for
testing, and in the last year, they have collectively played roughly a
billion chess games. In the last few years, our distributed testing
framework, Fishtest, has been operated superbly and has been developed and
improved extensively. This work by Pasquale Pigazzini, Tom Vijlbrief, Michel
Van den Bergh, and various other developers[3] is an essential part of the
success of the Stockfish project.
Indeed, the Stockfish project builds on a thriving community of enthusiasts
to offer a free and open-source chess engine that is robust, widely
available, and very strong. We invite our chess fans to join the Fishtest
testing framework and programmers to contribute to the project[4].
This patch lessens the Late Move Reduction at PV nodes with low depth. Previously the effect of depth on LMR was independent of nodeType. The idea behind this patch is that at PV nodes, LMR at low depth will miss out on potential alpha-raising moves.
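A minimal sketch of the idea, with hypothetical names and an illustrative depth cutoff (the patch's actual constants are not reproduced here): the reduction is lessened only at PV nodes at low depth.

```
#include <algorithm>

enum NodeType { NonPV, PV };

// Lessen LMR at PV nodes with low depth so potential alpha-raising
// moves are searched closer to full depth.
int lmr_reduction(int baseReduction, int depth, NodeType nodeType) {
    int r = baseReduction;
    if (nodeType == PV && depth < 6)  // illustrative cutoff
        r -= 1;
    return std::max(r, 0);
}
```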
This patch enforces that NNUE evaluation is used for endgame positions at shallow depth (depth <= 9).
Classical evaluation will still be used for high-imbalance positions when the depth is high or there are many pieces.
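A hedged sketch of the dispatch described above (the thresholds and names are assumptions, not the engine's constants): NNUE is forced for shallow-depth endgames, and classical evaluation is kept only for high-imbalance positions at high depth or with many pieces.

```
#include <cstdlib>

struct Position {
    int pieceCount;
    int materialImbalance;  // rough material difference, in centipawns
};

bool use_classical_eval(const Position& pos, int depth) {
    const bool endgame       = pos.pieceCount <= 12;                  // illustrative
    const bool highImbalance = std::abs(pos.materialImbalance) > 800; // illustrative
    if (endgame && depth <= 9)  // enforce NNUE for shallow endgame searches
        return false;
    return highImbalance && (depth > 9 || pos.pieceCount > 12);
}
```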
Topologist [Mon, 28 Mar 2022 09:50:08 +0000 (11:50 +0200)]
Play more positionally in endgames
This patch chooses the delta value (which skews the NNUE evaluation between positional and materialistic)
depending on the material: if the material is low, delta will be higher and the evaluation is shifted
toward the positional value. If the material is high, the evaluation is shifted toward the PSQT value.
I don't think slightly negative values of delta should be a concern.
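A sketch under stated assumptions (the scale, constants, and names are invented for illustration): delta grows as non-pawn material shrinks, shifting weight from the PSQT term toward the positional term, and goes slightly negative with full material.

```
// Map material to delta: low material -> larger delta (more positional),
// full material -> small, possibly slightly negative delta (more PSQT).
int delta_for_material(int nonPawnMaterial, int maxMaterial = 8000) {
    return 24 - 32 * nonPawnMaterial / maxMaterial;
}

// Blend the two NNUE outputs using delta as the skew.
int blended_eval(int psqt, int positional, int nonPawnMaterial) {
    const int delta = delta_for_material(nonPawnMaterial);
    return ((128 - delta) * psqt + (128 + delta) * positional) / 256;
}
```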
Michael Chaly [Mon, 28 Mar 2022 11:15:56 +0000 (14:15 +0300)]
In movepicker increase priority for moves that evade a capture
This idea is a mix of Koivisto's threat-history idea and a heuristic that
was simplified away in LMR some time ago - decreasing the reduction for moves that evade a capture.
Instead of doing so in LMR, this patch does it in the movepicker: it
calculates the squares that are attacked by the different piece types and the pieces located
on these squares, and boosts the weight of moves that make these pieces land on a square that is not under threat.
The boost is greater for pieces with bigger material values.
Special thanks to the Koivisto and Seer authors for explaining the ideas behind threat history to me.
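A rough sketch with assumed bitboard helpers (nothing here is the engine's actual code): a move gets a bonus when its piece currently stands on a threatened square and lands on a safe one, with the bonus scaled by the piece's value.

```
#include <cstdint>

using Bitboard = std::uint64_t;

// Boost moves that take a threatened piece to a square that is not under
// threat; more valuable pieces get a bigger boost.
int capture_evasion_bonus(Bitboard threatenedPieces, Bitboard safeSquares,
                          Bitboard fromBB, Bitboard toBB, int pieceValue) {
    const bool escapes = (threatenedPieces & fromBB) && (safeSquares & toBB);
    return escapes ? pieceValue / 8 : 0;  // illustrative scaling
}
```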
This patch replaces `pos.capture_or_promotion()` with `pos.capture()`
and comes after a few attempts with elo-gaining bounds, two of which
failed yellow at LTC
(https://tests.stockfishchess.org/tests/view/622f8f0cc9e950cbfc237024
and
https://tests.stockfishchess.org/tests/view/62319a8bb3b498ba71a6b2dc).
Via the ttPv flag, an implicit tree of current and former PV nodes is maintained. In addition, this tree is grown or shrunk at the leaves depending on the search results. This patch removes the shrinking step.
As the frequency of ttPv nodes decreases with depth, the scaling behavior shown by the tests (STC barely passed but LTC scales well) was expected.
Michael Chaly [Tue, 8 Mar 2022 07:56:07 +0000 (10:56 +0300)]
Decrease reductions in LMR for some PV nodes
This patch makes us reduce less in LMR at PV nodes when alpha is far away from the static evaluation of the position.
The idea is that if this is the case, the position is probably quite complex, so we cannot be sure how reliable LMR is and should reduce less.
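Expressed as a sketch (the margin and names are hypothetical): the reduction drops by one ply at PV nodes when the gap between alpha and the static evaluation is large.

```
#include <cstdlib>

// Treat a large |staticEval - alpha| gap as a sign of a complex position
// and trust LMR less there.
int adjusted_pv_reduction(int r, bool pvNode, int staticEval, int alpha) {
    if (pvNode && std::abs(staticEval - alpha) > 250)  // illustrative margin
        r -= 1;
    return r;
}
```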
This patch (partially) sorts captures in analogy to quiet moves. All
three movepickers are affected, hence `depth` is added as an argument to
probcut's.
mstembera [Thu, 24 Feb 2022 02:19:36 +0000 (18:19 -0800)]
Clean up and simplify some nnue code.
Remove some unnecessary code and its execution during inference. Also, the change on line 49 in nnue_architecture.h results in a more efficient SIMD code path through ClippedReLU::propagate().
Michael Chaly [Sat, 19 Feb 2022 15:24:11 +0000 (18:24 +0300)]
Adjust usage of LMR for 2nd move in move ordering
Current master prohibits the usage of LMR for the 2nd move at the rootNode. This patch disables LMR for the 2nd move not only at the rootNode but also at the first PvNode that is a reply to the rootNode.
ppigazzini [Sun, 6 Feb 2022 18:20:30 +0000 (19:20 +0100)]
Add ARM NDK to Github Actions matrix
- set the variable only for the required tests to keep the yml file simple
- use NDK 21.x until the Stockfish static build problem with NDK 23.x is fixed
- set up the tests for the armv7, armv7-neon, and armv8 builds:
  - use the armv7a-linux-androideabi21-clang++ compiler for armv7 and armv7-neon
  - enforce a static build
  - silence the warning for the unused compilation flag "-pie" with
    the static build, otherwise the Github workflow stops
  - use qemu to bench the build and get the signature
Many thanks to @pschneider1968 that made all the hard work with NDK :)
Michael Chaly [Thu, 17 Feb 2022 07:54:07 +0000 (10:54 +0300)]
Tune search at very long time control
This patch is a result of tuning done by user @candirufish after 150k games.
Since the tuned values were really interesting and touched heuristics
that are known for their non-linear scaling, I decided to run a limited-games
LTC match, even though the STC test was really bad (which was expected).
After seeing the results of the LTC match, I also ran a VLTC (very long
time control) SPRT test, which passed.
The main difference is in extensions: this patch allows much more
singular/double extensions, both in terms of allowing them at lower
depths and with lesser margins.
Michael Chaly [Sat, 12 Feb 2022 17:08:45 +0000 (20:08 +0300)]
Big search tuning (version 2)
One more tuning - this one includes the newly introduced heuristics and
some other parameters that were not included in the previous one. It is the
result of 400k games at 20+0.2, applied "as is". Tuning is continuing, since
there is probably a lot more Elo to gain.
The most important architectural changes are the following:
* 1024x2 [activated] neurons are pairwise, elementwise multiplied (not quite pairwise due to implementation details, see diagram), which introduces a non-linearity that exhibits similar benefits to previously tested sigmoid activation (quantmoid4), while being slightly faster.
* The following layer has therefore 2x less inputs, which we compensate by having 2 more outputs. It is possible that reducing the number of outputs might be beneficial (as we had it as low as 8 before). The layer is now 1024->16.
* The 16 outputs are split into 15 and 1. The 1-wide output is added to the network output (after some necessary scaling due to quantization differences). The 15-wide is activated and follows the usual path through a set of linear layers. The additional 1-wide output is at least neutral, but has shown a slightly positive trend in training compared to networks without it (all 16 outputs through the usual path), and allows possibly an additional stage of lazy evaluation to be introduced in the future.
Additionally, the inference code was rewritten and no longer uses a recursive implementation. This was necessitated by the splitting of the 16-wide intermediate result into two, which was impossible to do cleanly with the old implementation. This is hopefully for the better overall.
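To make the pairwise multiplication described above concrete, here is a float-domain sketch (the engine's code is quantized and SIMD-vectorized, and the exact pairing differs for implementation reasons, as the commit's diagram shows): 2*n accumulator values are clipped and multiplied in pairs, halving the width that feeds the 1024->16 layer.

```
#include <algorithm>
#include <cstddef>
#include <vector>

// Multiply clipped activations in pairs: 2*n inputs -> n outputs.
std::vector<float> multiplied_activation(const std::vector<float>& acc) {
    const std::size_t n = acc.size() / 2;
    std::vector<float> out(n);
    for (std::size_t i = 0; i < n; ++i) {
        const float a = std::clamp(acc[i],     0.0f, 1.0f);
        const float b = std::clamp(acc[i + n], 0.0f, 1.0f);
        out[i] = a * b;  // non-linearity with benefits similar to quantmoid4
    }
    return out;
}
```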
First session:
The first session was training a network from scratch (random initialization). The exact trainer used was slightly different (older) from the one used in the second session, but it should not have a measurable effect. The purpose of this session is to establish a strong network base for the second session. Small deviations in strength do not harm the learnability in the second session.
The training was done using the following command:
Every 20th net was saved and its playing strength measured against some baseline at 25k nodes per move with pure NNUE evaluation (modified binary). The exact setup is not important as long as it's consistent. The purpose is to sift good candidates from bad ones.
The dataset can be found at https://drive.google.com/file/d/1UQdZN_LWQ265spwTBwDKo0t1WjSJKvWY/view
Second session:
The second training session was done starting from the best network (as determined by strength testing) from the first session. It is important that it's resumed from a .pt model and NOT a .ckpt model. The conversion can be performed directly using serialize.py
The LR schedule was modified to use gamma=0.995 instead of gamma=0.992 and LR=4.375e-4 instead of LR=8.75e-4 to flatten the LR curve and allow for longer training. The training was then running for 800 epochs instead of 400 (though it's possibly mostly noise after around epoch 600).
The training was done using the following command:
In particular, note that we now use lambda=1.0 instead of lambda=0.8 (previous nets), because tests show that the WDL skipping introduced by vondele performs better with lambda=1.0. Nets were saved every 20th epoch. In total, 16 runs were made with these settings, and the best nets were chosen according to playing strength at 25k nodes per move with pure NNUE evaluation - these are the 4 nets that have been put on fishtest.
The dataset can be found either at ftp://ftp.chessdb.cn/pub/sopel/data_sf/T60T70wIsRightFarseerT60T74T75T76.binpack in its entirety (the download might be painfully slow because it is hosted in China) or can be assembled in the following way:
Get the https://github.com/official-stockfish/Stockfish/blob/5640ad48ae5881223b868362c1cbeb042947f7b4/script/interleave_binpacks.py script.
Download T60T70wIsRightFarseer.binpack https://drive.google.com/file/d/1_sQoWBl31WAxNXma2v45004CIVltytP8/view
Download farseerT74.binpack http://trainingdata.farseer.org/T74-May13-End.7z
Download farseerT75.binpack http://trainingdata.farseer.org/T75-June3rd-End.7z
Download farseerT76.binpack http://trainingdata.farseer.org/T76-Nov10th-End.7z
Run python3 interleave_binpacks.py T60T70wIsRightFarseer.binpack farseerT74.binpack farseerT75.binpack farseerT76.binpack T60T70wIsRightFarseerT60T74T75T76.binpack
with some hand polishing on top of it, which includes:
a) correcting the trend sigmoid - for some reason the original tuning resulted in it being negative. This heuristic has been proven to be worth some Elo for years, so reversing its sign is probably a random artefact;
b) removing changes to continuation-history-based pruning - this heuristic has historically been really good at producing green STCs and then failing miserably at LTC whenever we tried to make it more strict. The original tuning was done at short time control and thus made it more strict - which doesn't scale to longer time controls;
c) removing changes to improvement - not really intended :).
Michael Chaly [Mon, 7 Feb 2022 10:32:21 +0000 (13:32 +0300)]
Do less depth reduction in null move pruning for complex positions
This patch makes us reduce depth less in null move pruning if the complexity is high enough.
Thus, null move pruning now depends on complexity in two distinct ways,
while being the only search heuristic that exploits complexity so far.
Michael Chaly [Sat, 5 Feb 2022 01:03:02 +0000 (04:03 +0300)]
Reintroduce razoring
Razoring was simplified away some years ago; this patch reintroduces it in a slightly different form.
Now, for low depths, if the eval is far below alpha, we check whether qsearch can push it above alpha - and if it can't, we return a fail low.
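A minimal sketch of the scheme, assuming a zero-window qsearch helper and with an invented margin formula (the real constants live in the patch):

```
// Hypothetical stub standing in for the real quiescence search.
int qsearch(int alpha, int beta);

constexpr int kContinue = 32002;  // sentinel: no razoring cutoff, keep searching

int razoring(int depth, int eval, int alpha) {
    const int margin = 300 + 250 * depth * depth;  // illustrative margin
    if (depth <= 7 && eval < alpha - margin) {
        const int value = qsearch(alpha - 1, alpha);  // zero-window verification
        if (value < alpha)
            return value;  // qsearch could not push above alpha: fail low
    }
    return kContinue;
}
```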
Michael Chaly [Fri, 4 Feb 2022 19:42:41 +0000 (22:42 +0300)]
Introduce movecount pruning for quiet check evasions in qsearch
The idea of this patch is that we usually don't consider quiet check evasions to be "good" ones and prefer capture-based evasions instead. So it makes sense to think that if, in qsearch, two quiet check evasions have failed to produce anything good, the third and further ones won't be good either.
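The resulting condition is tiny; a sketch with hypothetical names:

```
// Once two quiet check evasions have been searched without success,
// prune the remaining quiet evasions and keep only capture-based ones.
bool prune_quiet_check_evasion(bool inCheck, bool isCapture,
                               int quietEvasionsSearched) {
    // This would be the 3rd (or later) quiet evasion: prune it.
    return inCheck && !isCapture && quietEvasionsSearched >= 2;
}
```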
Michael Chaly [Sat, 29 Jan 2022 03:39:40 +0000 (06:39 +0300)]
Do stats updates after LMR for captures
Since captures that go through LMR use the continuation histories of the corresponding quiet moves, it makes sense to update these histories when such a capture passes LMR, by analogy to the existing logic for quiet moves.
pschneider1968 [Fri, 21 Jan 2022 13:11:53 +0000 (14:11 +0100)]
Fix Makefile for Android NDK cross-compile
For cross-compiling to Android on Windows, the Makefile needs some tweaks.
Tested with Android NDK 23.1.7779620 and 21.4.7075529, using
Windows 10 with clean MSYS2 environment (i.e. no MINGW/GCC/Clang
toolchain in PATH) and Fedora 35, with build target:
build ARCH=armv8 COMP=ndk
The resulting binary runs fine inside Droidfish on my Samsung
Galaxy Note20 Ultra and Samsung Galaxy Tab S7+
Other builds tested to exclude regressions: MINGW64/Clang64 build
on Windows; MINGW64 cross build, native Clang and GCC builds on Fedora.
Wiki docs: https://github.com/glinscott/fishtest/wiki/Cross-compiling-Stockfish-for-Android-on-Windows-and-Linux
J. Oster [Sat, 15 Jan 2022 17:18:52 +0000 (18:18 +0100)]
Simplify limiting extensions.
Replace the current method for limiting extensions, used to avoid the search getting stuck,
with a much simpler one.
The test position in https://github.com/official-stockfish/Stockfish/commit/73018a03375b4b72ee482eb5a4a2152d7e4f0aac
can still be searched without the search getting stuck.
Fixes #3815, where the search now makes progress in rootDepth.
Shows robust behavior in a depth 10 search over 1M positions.
ppigazzini [Mon, 17 Jan 2022 13:03:16 +0000 (14:03 +0100)]
Improve Makefile for Windows native builds
A Windows Native Build (WNB) can be done:
- on Windows, using a recent mingw-w64 g++/clang compiler
distributed by msys2, cygwin and others
- on Linux, using mingw-w64 g++ to cross compile
Improvements:
- check for a WNB in a proper way and set a variable to simplify the code
- set the proper EXE for a WNB
- use the proper name for the mingw-w64 clang compiler
- use static linking for a WNB
- use wine to make a PGO cross compile on Linux (also with Intel SDE)
- enable the LTO build for mingw-w64 g++ compiler
- set `lto=auto` to use the make's job server, if available, or otherwise
to fall back to autodetection of the number of CPU threads
- clean up all the temporary LTO files saved in the local directory
Rui Coelho [Mon, 17 Jan 2022 16:51:20 +0000 (16:51 +0000)]
Use average complexity for time management
This patch is a variant of the idea by locutus2 (https://tests.stockfishchess.org/tests/view/61e1f24cb1f9959fe5d88168) to adjust the total time depending on the average complexity of the position.
Rui Coelho [Thu, 13 Jan 2022 18:30:53 +0000 (18:30 +0000)]
Use complexity in search
This patch uses the complexity measure (from #3875) as a heuristic for null move pruning.
Hopefully, there may be room to use it in other pruning techniques.
I would like to thank vondele and locutus2 for the feedback and suggestions during testing.
lonfom169 [Sat, 1 Jan 2022 17:09:08 +0000 (14:09 -0300)]
Simplify away rangeReduction
Remove rangeReduction, introduced in [#3717](https://github.com/official-stockfish/Stockfish/pull/3717),
as it seemingly doesn't bring enough Elo anymore. It might be interesting to add
new forms of reduction or to tune the reduction formula in the future.
lonfom169 [Fri, 31 Dec 2021 01:50:20 +0000 (22:50 -0300)]
Smooth out doDeeperSearch
Adjust the threshold based on the difference between newDepth and the LMR depth.
With more reduction, a bigger fail high is required in order to perform the deeper search.
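Sketched with invented constants: the fail-high margin required before searching deeper grows with the amount of reduction that was applied (newDepth - lmrDepth).

```
// The more the move was reduced, the bigger the fail high must be
// before we grant the deeper re-search.
bool do_deeper_search(int value, int bestValue, int newDepth, int lmrDepth) {
    const int threshold = 50 + 10 * (newDepth - lmrDepth);  // illustrative
    return value > bestValue + threshold;
}
```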
Stéphane Nicolet [Thu, 30 Dec 2021 10:23:40 +0000 (11:23 +0100)]
Tweak optimism with complexity
This patch increases the optimism bonus for "complex positions", where the
complexity is measured as the absolute value of the difference between material
and the sophisticated NNUE evaluation (idea by Joost VandeVondele).
Also rename some variables in evaluate() while there.
Closes https://github.com/official-stockfish/Stockfish/pull/3875
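A sketch of the measure and its use, with assumed scaling (the patch's own constants are in the PR): complexity is the absolute gap between the material/PSQT balance and the full NNUE output, and the optimism bonus grows with it.

```
#include <cstdlib>

// Complexity: how far the sophisticated NNUE eval strays from material.
int complexity(int psqtValue, int nnueValue) {
    return std::abs(psqtValue - nnueValue);
}

int adjusted_optimism(int optimism, int psqtValue, int nnueValue) {
    // Larger gaps mark "complex" positions and earn a bigger bonus.
    return optimism * (100 + complexity(psqtValue, nnueValue)) / 100;
}
```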
Follow-up from https://github.com/official-stockfish/Stockfish/commit/a5a89b27c8e3225fb453d603bc4515d32bb351c3
Michael Chaly [Wed, 22 Dec 2021 04:54:10 +0000 (07:54 +0300)]
Fall back to NNUE if classical evaluation is much lower than threshold
The idea is that if the classical eval returns a value much lower than the threshold for
its usage, it most likely means that the position isn't that simple,
so we need the more precise NNUE evaluation.
bmc4 [Mon, 20 Dec 2021 19:20:08 +0000 (16:20 -0300)]
Update Elo estimates for terms in search
This updates the estimates from 2 years ago (#2401) and adds missing terms.
All tests were run at 10+0.1 (STC), 20000 games, error bars +- 1.8 Elo, book 8moves_v3.pgn.
A table of Elo values with the links to the corresponding tests can be found at the PR
Michael Chaly [Sat, 18 Dec 2021 23:30:45 +0000 (02:30 +0300)]
Reintroduce futility pruning for captures
This is a reintroduction of an idea that was simplified away approximately 1 year ago.
There are some tweaks to it:
a) exclude promotions;
b) exclude PV nodes from it - the PV-node logic for captures is really different from that of non-PV nodes, so this makes a lot of sense;
c) use a big grain of capture history - an idea taken from my recent patches in futility pruning.
Michael Chaly [Sat, 18 Dec 2021 08:51:19 +0000 (11:51 +0300)]
Adjust reductions based on current node delta and root delta
This patch is a follow-up to the previous 2 patches that introduced more reductions for PV nodes with low delta and more pruning for nodes with low delta. Instead of writing separate heuristics, it now adjusts reductions based on delta / rootDelta - this allows removing 3 separate adjustments of pruning/LMR in different places and also makes the dependence of reductions on delta and rootDelta smoother. It also now works for all pruning heuristics, and not just 2.
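As a sketch of the single smooth correction (the formula is invented for illustration; the patch's real formula differs): the reduction shrinks as the node's aspiration window delta grows relative to rootDelta.

```
// One smooth delta-based term replaces the former separate adjustments:
// comparatively wide windows (delta close to rootDelta) reduce less.
// Assumes rootDelta > 0, which holds once the root window is open.
int smoothed_reduction(int baseR, int delta, int rootDelta) {
    return baseR - baseR * delta / (3 * rootDelta);
}
```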
George Sobala [Mon, 13 Dec 2021 15:29:31 +0000 (15:29 +0000)]
Fix for profile-build failure using gcc on MacOS
Fixes https://github.com/official-stockfish/Stockfish/issues/3846,
where the profiling SF binary generated by GCC on macOS would launch
but fail to quit. Tested with gcc-8, gcc-9, gcc-10, and gcc-11.
The problem can be fixed by adding -fvisibility=hidden to the compiler
flags, see for example the following piece of Apple documentation:
https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/CppRuntimeEnv/Articles/SymbolVisibility.html
For instance this now works:
make -j8 profile-build ARCH=x86-64-avx2 COMP=gcc COMPCXX=g++-11
bmc4 [Tue, 14 Dec 2021 15:00:45 +0000 (12:00 -0300)]
Simplify away singularQuietLMR
While at it, we also update the Elo estimate of reduction at non-PV nodes
(source: https://tests.stockfishchess.org/tests/view/61acf97156fcf33bce7d6303 )
Using data T60 12/1/20 to 11/2/2021, T74 4/22/21 to 7/27/21, T75 6/3/21 to 10/16/21, T76
(half of the randomly interleaved dataset due to a mistake merging) 11/10/21 to 11/21/21,
wrongIsRight_nodes5000pv2.binpack, and WrongIsRight-Reloaded.binpack combined and shuffled
position by position.
Trained with LR=4.375e-4 and WDL filtering enabled:
Remove pawn scaling; it is probably correlated with piece scaling, and might be less useful with the recent improved nets. This might allow for another tune of the scaling params.
Remove the difference to the previous best score from the falling-eval calculation. As compensation, double the effect of the difference to the previous best average score.
Michael Chaly [Thu, 9 Dec 2021 16:33:06 +0000 (19:33 +0300)]
Adjust singular extension depth restriction
This patch is a modification of an original idea by lonfom169 which had a good yellow run:
do the singular extension search with depth threshold 6, unless this is a PvNode which is a part of a PV line -
for those, set the threshold to 8 instead.
Michael Chaly [Tue, 7 Dec 2021 16:46:21 +0000 (19:46 +0300)]
Introduce post-lmr extensions
This idea is somewhat similar to extensions in LMR but has a different flavour.
If the result of LMR was really good - that is, it exceeded alpha by some pretty
big given margin - we can extend the move after LMR in the full-depth search with a zero window.
The idea is that this move is a fail high with a somewhat big
probability, so extending it makes a lot of sense.
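Schematically (the margin and names are hypothetical), the flow after a reduced search looks like this:

```
// Hypothetical zero-window search stub standing in for the real search.
int search(int depth, int alpha, int beta);

int after_lmr(int lmrValue, int alpha, int newDepth) {
    const int margin = 80;  // illustrative
    // A reduced search that beats alpha by a wide margin is probably a
    // fail high, so re-search at full depth plus one ply, zero window.
    const int depth = newDepth + (lmrValue > alpha + margin ? 1 : 0);
    return search(depth, alpha, alpha + 1);
}
```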
Tomasz Sobczyk [Thu, 2 Dec 2021 11:29:11 +0000 (12:29 +0100)]
Optimize FT activation and affine transform for NEON.
This patch optimizes the NEON implementation in two ways.
The activation layer after the feature transformer is rewritten to make it easier for the compiler to see through dependencies and unroll. In itself this is a minimal but positive improvement. Other architectures could benefit from this too in the future. This is not an algorithmic change.
The affine transform for large matrices (first layer after FT) on NEON now utilizes the same optimized code path as >=SSSE3, which makes the memory accesses more sequential and makes better use of the available registers, which allows for code that has longer dependency chains.
Benchmarks from Redshift#161, profile-build with apple clang
Michael Chaly [Mon, 6 Dec 2021 00:52:44 +0000 (03:52 +0300)]
Assign extra bonus for previous move that caused a fail low more often
This patch allows assigning an extra bonus for a previous move that caused a fail low not only at PvNodes and cutNodes but also at some allNodes - namely, when the best result we could have got from the search is still far below alpha.
Initialize the continuation history with a slightly negative value, -71, instead of zero.
The idea is, because most history entries will later be negative anyway, to shift
the starting values a little bit in the "correct" direction. Of course, the effect of the
initialization diminishes with greater depth, so I had the apprehension that the LTC test
would be difficult to pass, but it passed.
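In code, the change is just the fill value; a sketch with a hypothetical table type:

```
#include <array>
#include <cstdint>

constexpr std::int16_t kContHistInit = -71;  // was 0 before this patch

template <std::size_t N>
void init_continuation_history(std::array<std::int16_t, N>& table) {
    // Start slightly negative: most entries end up negative anyway.
    table.fill(kContHistInit);
}
```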
Same process as in https://github.com/official-stockfish/Stockfish/commit/e4a0c6c75950bf27b6dc32490a1102499643126b
with the training started from the current master net.
In their infinite wisdom, Intel axed AVX512 from Alder Lake
chips (well, not entirely, but we kind of want to use the Gracemont
cores for chess!) but still added VNNI support.
Confusingly enough, this is not the same as VNNI256 support.
This adds a specific AVX-VNNI target that will use this AVX-VNNI
mode, by prefixing the VNNI instructions with the appropriate VEX
prefix, and avoiding AVX512 usage.
This is about 1% faster on P cores:
```
Result of 20 runs
==================
base (./clang-bmi2 ) = 3306337 +/- 7519
test (./clang-vnni ) = 3344226 +/- 7388
diff = +37889 +/- 4153
speedup = +0.0115
P(speedup > 0) = 1.0000
```
But a nice 3% faster on E cores:
```
Result of 20 runs
==================
base (./clang-bmi2 ) = 1938054 +/- 28257
test (./clang-vnni ) = 1994606 +/- 31756
diff = +56552 +/- 3735
speedup = +0.0292
P(speedup > 0) = 1.0000
```
This was measured on Clang 13. GCC 11.2 appears to generate
worse code for Alder Lake, though the speedup on the E cores
is similar.
It is possible to run the engine specifically on the P or E cores using binding;
for example, on Linux it is possible to use (for an 8 P + 8 E setup like the i9-12900K):
taskset -c 0-15 ./stockfish
taskset -c 16-23 ./stockfish
where the first call binds to the P-cores and the second to the E-cores.
bmc4 [Mon, 29 Nov 2021 17:51:35 +0000 (14:51 -0300)]
Correctly reset bestMoveChanges
for searches not using time management (e.g. analysis, fixed node game play etc),
bestMoveChanges was not reset during search iterations. As LMR uses this quantity,
search was somewhat weaker.
Tested using fixed node playing games:
```
./c-chess-cli -each nodes=10000 option.Hash=16 -engine cmd=../Stockfish/src/fix -engine cmd=../Stockfish/src/master -concurrency 6 -openings file=../books/UHO_XXL_+0.90_+1.19.epd -games 10000
Score of Stockfish Fix vs Stockfish Master: 3187 - 3028 - 3785 [0.508] 10000
```
noobpwnftw [Tue, 30 Nov 2021 01:22:07 +0000 (09:22 +0800)]
Enable compilation on older Windows systems
Improve compatibility of the last NUMA patch when running under older versions of Windows,
for instance Windows Server 2003. Reported by user "g3g6" in the following comments:
https://github.com/official-stockfish/Stockfish/commit/7218ec4df9fef1146a451b71f0ed3bfd8123c9f9
New net trained with nnue-pytorch, started from a master net, on a data set of Leela data
(T60.binpack + T74.binpack), Stockfish data (wrongIsRight_nodes5000pv2.binpack), and
Michael Babigian's conversion of T60 Leela data (including TB7 rescoring) (farseer.binpack),
available as a single interleaved binpack:
Michael Chaly [Sun, 28 Nov 2021 12:19:18 +0000 (15:19 +0300)]
Refine futility pruning for parent nodes
This patch is the result of a refinement of the tuning vondele did after the
new net passed, plus some hand-made value adjustments - excluding
changes to other pruning heuristics, and rounding the value of the history
divisor to the nearest power of 2.
With this patch futility pruning becomes more aggressive, and the
influence of history on it is doubled again.