We now use the same guard condition (testing that we already have a defense with
a score better than a TB loss) for all pruning heuristics in qsearch().
This allows some pruning when in check, but in a controlled way to ensure that
no wrong mate scores appear.
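As a hedged sketch (identifiers follow Stockfish conventions; the committed code may differ in detail):

```cpp
// Inside the qsearch() move loop: pruning is permitted only once bestValue
// proves we already have a defense better than a TB loss, so no pruned
// branch can hide a forced mate against us.
if (bestValue > VALUE_TB_LOSS_IN_MAX_PLY)
{
    // ... futility pruning, movecount pruning, countermove pruning ...
}
```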
Tomasz Sobczyk [Tue, 3 Nov 2020 21:49:10 +0000 (22:49 +0100)]
AVX-512 for smaller affine and feature transforms.
For the feature transformer the code is analogous to AVX2, since there was room for easy adaptation to wider simd registers.
For the smaller affine transforms that have 32 byte stride we keep 2 columns in one zmm register. We also unroll more aggressively so that in the end we have to do 16 parallel horizontal additions on ymm slices each consisting of 4 32-bit integers. The slices are embedded in 8 zmm registers.
These changes provide about 1.5% speedup for AVX-512 builds.
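A hedged illustration of the packing (the intrinsics are real AVX-512F; acc_ptr stands in for an assumed 64-byte-aligned source, and the surrounding kernel is omitted):

```cpp
#include <immintrin.h>

// Two 32-byte columns (8 x int32 each) share one 64-byte zmm register;
// splitting the register back into ymm halves yields the slices on which
// the 16 parallel horizontal additions are performed.
__m512i two_cols = _mm512_load_si512(acc_ptr);              // columns j and j+1
__m256i col0     = _mm512_castsi512_si256(two_cols);        // lower column
__m256i col1     = _mm512_extracti64x4_epi64(two_cols, 1);  // upper column
```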
The advantage of using normal time management for a single legal move is that the scores
reported for that move are reasonable; not searching leads to artifacts during games
(see e.g. https://tcec-chess.com/#div=sf&game=96&season=19)
The disadvantage of using normal time management for a single legal move is that thinking
times can be unnaturally long, making it 'painful to watch' in online tournaments.
This patch uses normal time management, but caps the used time to 500ms.
This should lead to reasonable scores, and be hardly perceptible.
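A minimal sketch of the cap, with hypothetical variable names from the time-management code:

```cpp
// If there is only one legal move, still search it (so the reported score
// is meaningful), but never think longer than half a second on it.
if (rootMoves.size() == 1)
    optimumTime = std::min(optimumTime, TimePoint(500));
```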
J. Oster [Sun, 1 Nov 2020 17:33:17 +0000 (18:33 +0100)]
Fix incorrect pruning in qsearch
Only do countermove based pruning in qsearch if we already have a move with a better score than a TB loss.
This patch fixes a bug (introduced in 843a961) that incorrectly prunes moves when in check,
and adds an assert to make sure no wrong mate scores are given in the future.
It replaces a no-op moveCount check with a check for bestValue.
Initially discussed in #3171 and later in #3199, #3198 and #3210.
This PR effectively closes #3171
It also likely fixes #3196, where these incorrect mate scores probably caused
the user-visible incorrect TB scores.
Tomasz Sobczyk [Wed, 28 Oct 2020 23:14:53 +0000 (00:14 +0100)]
Optimize affine transform for SSSE3 and higher targets.
A non-functional speedup. Unroll the loops going over
the output dimensions in the affine transform layers by
a factor of 4 and perform 4 horizontal additions at a time.
Instead of doing naive horizontal additions on each vector
separately, use hadd and shuffling between vectors to reduce
the instruction count, keeping all lanes busy through all
stages of the horizontal adds.
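For example, four per-output sum vectors can be folded into one result vector with just three hadd instructions (a hedged sketch of the technique, not the committed code):

```cpp
#include <tmmintrin.h>  // SSSE3

// sum0..sum3 each hold four int32 partial sums for one output dimension.
// Two rounds of hadd leave {total0, total1, total2, total3} in a single
// vector, keeping all lanes busy instead of reducing each vector alone.
static __m128i hadd4(__m128i sum0, __m128i sum1, __m128i sum2, __m128i sum3)
{
    __m128i sum01 = _mm_hadd_epi32(sum0, sum1);
    __m128i sum23 = _mm_hadd_epi32(sum2, sum3);
    return _mm_hadd_epi32(sum01, sum23);  // {total0, total1, total2, total3}
}
```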
syzygy1 [Tue, 27 Oct 2020 18:22:41 +0000 (19:22 +0100)]
Do not skip non-recapture ttMove when in check
The qsearch() MovePicker incorrectly skips a non-recapture ttMove
when in check (if depth <= DEPTH_QS_RECAPTURES). This is clearly not
intended and can cause qsearch() to return a mate score when there
is no mate. Introduced in cad300c and 6596f0e, as observed by
joergoster in #3171 and #3198.
This PR fixes the bug by not skipping the non-recapture ttMove when in check.
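A hedged sketch of the corrected condition (close to, but not necessarily identical with, the committed MovePicker code):

```cpp
// qsearch MovePicker setup: at recapture depths, reject a non-recapture
// ttMove only when NOT in check; in check all evasions are searched, so
// the ttMove must remain usable.
ttMove =   ttm
        && pos.pseudo_legal(ttm)
        && (   depth > DEPTH_QS_RECAPTURES
            || pos.checkers()                      // the fix
            || to_sq(ttm) == recaptureSquare)
           ? ttm : MOVE_NONE;
```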
syzygy1 [Tue, 20 Oct 2020 19:06:06 +0000 (21:06 +0200)]
More incremental accumulator updates
This patch was inspired by c065abd which updates the accumulator,
if possible, based on the accumulator of two plies back if
the accumulator of the preceding ply is not available.
With this patch we look back even further in the position history
in an attempt to reduce the number of complete recomputations.
When we find a usable accumulator for the position N plies back,
we also update the accumulator of the position N-1 plies back
because that accumulator is most likely to be helpful later
when evaluating positions in sibling branches.
By not updating all intermediate accumulators immediately,
we avoid doing too much work that is not certain to be useful.
Overall, roughly 2-3% speedup.
This patch makes the code more specific to the net architecture,
changing input features of the net will require additional changes
to the incremental update code, as discussed in PRs #3193 and #3191.
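A hedged sketch of the lookback (field names are illustrative, and the refresh helper is hypothetical):

```cpp
// Walk back through the StateInfo chain looking for a computed accumulator
// for this perspective; if the friendly king moved anywhere along the way,
// a full refresh is unavoidable.
StateInfo* st = pos.state();
while (st->previous && !st->accumulator.computed[perspective])
{
    if (st->dirtyPiece.piece[0] == make_piece(perspective, KING))
        return refresh_accumulator(pos, perspective);  // hypothetical helper
    st = st->previous;
}
// st now holds a usable accumulator N plies back: update forward from it,
// also materializing the accumulator at N-1 plies for sibling branches.
```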
Vizvezdenec [Sat, 17 Oct 2020 11:40:10 +0000 (13:40 +0200)]
Do more reductions for late quiet moves in case of consecutive fail highs.
The idea of this patch can be described as follows: when we have consecutive fail highs
and reach late enough moves at the root node, the probability that the remaining quiet
moves can produce an even bigger value than the moves that produced the previous cutoff
(moves that should be high in the move ordering, but now fail to produce a beta cutoff
because we have actually reached a high move count) should be quite small, so we can
reduce them more.
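A hedged sketch of the adjustment (threshold and exact placement are illustrative):

```cpp
// In the LMR code: after consecutive fail highs, recorded by the iterative
// deepening loop in thisThread->failedHighCnt, reduce late quiet moves by
// one extra ply, since they are unlikely to beat the earlier cutoff moves.
if (thisThread->failedHighCnt > 1 && moveCount > lateMoveThreshold)  // illustrative
    r++;
```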
The new net is based on the previous net 04cf2b4ed1da but with the biases
for the 1st hidden layer tuned with SPSA; see the SPSA session on fishtest here:
https://tests.stockfishchess.org/tests/view/5f875213dcdad978fe8c5211
Thanks to @vondele for writing out the net, see discussion in this thread:
https://github.com/mstembera/Stockfish/commit/432da86721647dff1d9426a7cdcfd2dbada8155e
Further tune the net parameters, now the last but one layer (32x32).
To limit the number of parameters optimized, the network layer was
decomposed using SVD, and the singular values were treated
as parameters and tuned.
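Schematically, with the 32x32 matrix written through its singular value decomposition, only the 32 singular values are exposed to the optimizer instead of all 1024 matrix entries (U and V stay fixed):

```latex
W = U \Sigma V^{\top}, \qquad \Sigma = \operatorname{diag}(\sigma_1, \ldots, \sigma_{32})
```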
We now include the total pawn count in the scaling factor for the output
of the NNUE evaluation network. This should have the effect of trying to
keep more pawns when SF has the advantage, but exchange them when she
is defending.
Thanks to Alexander Pagel (Lolligerhans) for the idea of using the
value of pawns to ease the comparison with the rest of the material
estimation.
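A hedged sketch of the idea (the committed formula and constants differ):

```cpp
// Scale the NNUE output with the total pawn count so positions that keep
// pawns score slightly higher for the side ahead. Constants illustrative.
Value v = nnueValue * (924 + 9 * pos.count<PAWN>()) / 1024;
```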
Replace log(i) with log(i + 0.25 * log(i)). This increases the reductions, especially for low values; for bigger values there are nearly no changes.
The idea is that division by a power of two is slightly faster than division by other numbers, so the parameters are adjusted such that the null move pruning depth reduction divides by 256 instead of 213.
Other than that, this patch is almost non-functional: differences only start to appear at depth 133.
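A hedged sketch of the rescaled null-move depth reduction (numerator constants are illustrative, rescaled by 256/213 from hypothetical old values):

```cpp
// Dividing by 256, a power of two, compiles to cheap shift-based code,
// unlike the previous divisor 213; the numerator is rescaled so that the
// resulting reduction R stays (almost) identical.
Depth R = (982 + 85 * depth) / 256 + std::min(int(eval - beta) / 192, 3);
```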
An optimization of Sergio's nn-03744f8d56d8.nnue, tuning the output layer (33 parameters) on game play.
WIP code to make layer parameters tunable is https://github.com/vondele/Stockfish/tree/optionOutput
The optimization itself uses https://github.com/vondele/nevergrad4sf
The modified net was written out using WIP code based on the learner code: https://github.com/vondele/Stockfish/tree/evalWrite
Most parameters in the output layer change only a little (~5 for int8_t).
Current master uses a constant scale factor of 5/4 = 1.25 for the output
of the NNUE network, for compatibility with search and classical evaluation.
We modify this scale factor to make it dependent on the phase of the game,
going from about 1.5 in the opening to 1.0 for pure pawn endgames.
This helps Stockfish to avoid exchanges of pieces (heavy pieces in particular)
when she has the advantage, keeping more material on the board when attacking.
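A hedged sketch (constants are illustrative, chosen only to match the stated 1.5 to 1.0 range):

```cpp
// Scale the NNUE output by remaining non-pawn material: about 1.5x with
// full material, falling to 1.0x in pure pawn endgames.
Value v = nnueValue * (1024 + pos.non_pawn_material() / 32) / 1024;
```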
Introduce a small chance of switching to NNUE if the PSQ imbalance is large but we have opposite-colored bishops and the classical eval is struggling to win.
On Windows, Stockfish wouldn't launch in some GUI because we output some
info strings (about the use of large pages) before sending the 'uci'
command. It seems more robust to suppress these info strings, and instead
to add a proper section in the Readme about large pages use.
- Clean signature of functions in namespace NNUE
- Add comment for countermove based pruning
- Remove bestMoveCount variable
- Add const qualifier to kpp_board_index array
- Fix spaces in get_best_thread()
- Fix indentation in capture LMR code in search.cpp
- Rename TtmemDeleter to LargePageDeleter
Sami Kiminki [Sun, 30 Aug 2020 16:41:30 +0000 (19:41 +0300)]
Add large page support for NNUE weights and simplify TT mem management
Use TT memory functions to allocate memory for the NNUE weights. This
should provide a small speed-up on systems where large pages are not
automatically used, including Windows and some Linux distributions.
Further, since we now have a wrapper for std::aligned_alloc(), we can
simplify the TT memory management a bit:
- We no longer need to store separate pointers to the hash table and
its underlying memory allocation.
- We also get to merge the Linux-specific and default implementations
of aligned_ttmem_alloc().
Finally, we'll enable the VirtualAlloc code path with large page
support also for Win32.
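A hedged sketch of the Windows large-page path (error handling elided; the real code also acquires SeLockMemoryPrivilege before allocating):

```cpp
#include <windows.h>

void* alloc_aligned_large_pages(size_t size)
{
    size_t large = GetLargePageMinimum();        // typically 2 MiB
    size = (size + large - 1) & ~(large - 1);    // round up to a page multiple
    void* p = VirtualAlloc(nullptr, size,
                           MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                           PAGE_READWRITE);
    if (!p)                                      // fall back to normal pages
        p = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    return p;
}
```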
Use tiling to speed up accumulator refreshes and updates
Perform the update and refresh operations tile by tile in a local
array of vectors. By selecting the array size carefully, we
achieve that the compiler keeps the whole array in vector registers.
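A hedged sketch of the tiling pattern (AVX2 intrinsics from <immintrin.h>; kHalfDims, accumulation and activeFeatureColumns are assumed context):

```cpp
constexpr int kNumRegs    = 16;             // ymm0..ymm15
constexpr int kTileHeight = kNumRegs * 16;  // 16 int16 lanes per ymm

for (int tile = 0; tile < kHalfDims / kTileHeight; ++tile)
{
    __m256i acc[kNumRegs];                  // small enough to stay in registers

    for (int i = 0; i < kNumRegs; ++i)      // load one tile of the accumulator
        acc[i] = accumulation[tile * kNumRegs + i];

    for (const __m256i* col : activeFeatureColumns)  // all updates in registers
        for (int i = 0; i < kNumRegs; ++i)
            acc[i] = _mm256_add_epi16(acc[i], col[tile * kNumRegs + i]);

    for (int i = 0; i < kNumRegs; ++i)      // write the tile back once
        accumulation[tile * kNumRegs + i] = acc[i];
}
```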
This PR sets the "comp" variable simply to "clang",
which seems to be more consistent and allows a small simplification.
The PR also moves the section that sets "profile_make" and "profile_use" to after the NDK section,
which ensures that these variables are now set correctly for NDK/clang.
NNUE appears to provide a more stable eval than the classic eval,
so the time use dependencies on bestMoveChanges, fallingEval,
etc may need to change to make the best use of available time.
This change doubles the effect of totBestMoveChanges when giving
more time because the choice of best move is unstable.
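A hedged sketch (the committed formula may differ in its other factors):

```cpp
// Time management: weight PV instability twice as strongly as before, so
// an unstable best move buys proportionally more thinking time.
double bestMoveInstability = 1.0 + 2.0 * totBestMoveChanges / Threads.size();
// ...the optimum time is then multiplied by bestMoveInstability, among other factors.
```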
No need to initialize statScore at the root node. The current logic is redundant because at subsequent levels the grandchildren's statScore is initialized to zero.
This patch doubles the moderate imbalance threshold and probability of using classical eval.
So now if imbalance is greater than PawnValueMg / 4 then there is a 1/8 chance of using classical eval.
Restore the default NNUE setting (enabled) after a bench command.
This also makes the resulting program settings independent of the
number of FENs that are being benched.
Bug fix in do_null_move() and NNUE simplification.
This fixes #3108 and removes some NNUE code that is currently not used.
At the moment, do_null_move() copies the accumulator from the previous
state into the new state, which is correct. It then clears the "computed_score"
flag because the side to move has changed, and with the other side to move
NNUE will return a completely different evaluation (normally with changed
sign but also with different NNUE-internal tempo bonus).
The problem is that do_null_move() clears the wrong flag. It clears the
computed_score flag of the old state, not of the new state. It turns out
that this almost never affects the search. For example, fixing it does not
change the current bench (but it does change the previous bench). This is
because the search code usually avoids calling evaluate() after a null move.
This PR corrects do_null_move() by removing the computed_score flag altogether.
The flag is not needed because nnue_evaluate() is never called twice on a position.
This PR also removes some unnecessary {}s and inserts a few blank lines
in the modified NNUE files in line with SF coding style.
This patch changes how previous early moves are penalized in case
search finds a best move. Here, the first quiet move that was not
a transposition table move is penalized.
It is our pleasure to release Stockfish 12 to users world-wide
Downloads will be freely available at
https://stockfishchess.org/download/
This version 12 of Stockfish plays significantly stronger than
any of its predecessors. In a match against Stockfish 11,
Stockfish 12 will typically win at least ten times more game pairs
than it loses.
This jump in strength, visible in regular progression tests during
development[1], results from the introduction of an efficiently
updatable neural network (NNUE) for the evaluation in Stockfish[2],
and associated tuning of the engine as a whole. The concept of the
NNUE evaluation was first introduced in shogi, and ported to
Stockfish afterward. Stockfish remains a CPU-only engine, since the
NNUE networks can be very efficiently evaluated on CPUs. The
recommended parameters of the NNUE network are embedded in
distributed binaries, and Stockfish will use NNUE by default.
Both the NNUE and the classical evaluations are available, and
can be used to assign values to positions that are later used in
alpha-beta (PVS) search to find the best move. The classical
evaluation computes this value as a function of various chess
concepts, handcrafted by experts, tested and tuned using fishtest.
The NNUE evaluation computes this value with a neural network based
on basic inputs. The network is optimized and trained on the
evaluations of millions of positions.
The Stockfish project builds on a thriving community of enthusiasts
that contribute their expertise, time, and resources to build a free
and open source chess engine that is robust, widely available, and
very strong. We invite chess fans to join the fishtest testing
framework and programmers to contribute on github[3].
Embed default net, and simplify using non-default nets
This covers the most important cases from the user's perspective:
It embeds the default net in the binary, so a download of that binary will result
in a working engine with the default net. The engine will be functional in the default mode
without any additional user action.
It allows non-default nets to be used, which will be looked for in up to
three directories (working directory, location of the binary, and optionally a specific default directory).
This mechanism is also kept for those developers that use MSVC,
the one compiler that doesn't have an easy mechanism for embedding data.
It is possible to disable embedding, and instead specify a specific directory, e.g. linux distros might want to use:
```
CXXFLAGS="-DNNUE_EMBEDDING_OFF -DDEFAULT_NNUE_DIRECTORY=/usr/share/games/stockfish/" make -j ARCH=x86-64 profile-build
```
syzygy1 [Mon, 24 Aug 2020 00:29:38 +0000 (02:29 +0200)]
Remove EvalList
This patch removes the EvalList structure from the Position object and generally simplifies the interface between do_move() and the NNUE code.
The NNUE evaluation function first calculates the "accumulator". The accumulator consists of two halves: one for white's perspective, one for black's perspective.
If the "friendly king" has moved or the accumulator for the parent position is not available, the accumulator for this half has to be calculated from scratch. To do this, the NNUE node needs to know the positions and types of all non-king pieces and the position of the friendly king. This information can easily be obtained from the Position object.
If the "friendly king" has not moved, its half of the accumulator can be calculated by incrementally updating the accumulator for the previous position. For this, the NNUE code needs to know which pieces have been added to which squares and which pieces have been removed from which squares. In principle this information can be derived from the Position object and StateInfo struct (in the same way as undo_move() does this). However, it is probably a bit faster to prepare this information in do_move(), so I have kept the DirtyPiece struct. Since the DirtyPiece struct now stores the squares rather than "PieceSquare" indices, there are now at most three "dirty pieces" (previously two). A promotion move that captures a piece removes the capturing pawn and the captured piece from the board (to SQ_NONE) and moves the promoted piece to the promotion square (from SQ_NONE).
To prevent user errors or generating untested code,
check explicitly that the ARCH variable is equivalent to a supported architecture
as listed in `make help`.
To nevertheless compile for an untested target the user can override the internal
variable, passing the undocumented `SUPPORTED_ARCH=true` to make.
Vizvezdenec [Mon, 24 Aug 2020 05:04:16 +0000 (08:04 +0300)]
Introduce countermove based pruning for qsearch
This patch continues the work of the previous patch in introducing pruning heuristics in qsearch by analogy to the main search, now with countermove-based pruning.
The idea: if a move is late enough, is a quiet check (we do generate those in qsearch), and has a bad enough countermove history, prune it.
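A hedged sketch inside the qsearch() move loop (threshold and history tables by analogy with the main search; the moveCount bound is illustrative):

```cpp
// Countermove-based pruning: a late quiet move whose continuation
// histories are both poor is skipped.
if (   !capture
    && moveCount > 2                                  // late enough
    && (*contHist[0])[pos.moved_piece(move)][to_sq(move)] < CounterMovePruneThreshold
    && (*contHist[1])[pos.moved_piece(move)][to_sq(move)] < CounterMovePruneThreshold)
    continue;
```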
Sami Kiminki [Fri, 21 Aug 2020 09:12:39 +0000 (12:12 +0300)]
Allow TT entries with key16==0 to be fetched
Fix the issue where a TT entry with key16==0 would always be reported
as a miss. Instead, we'll use depth8 to detect whether the TT entry is
occupied. In order to do that, we'll change DEPTH_OFFSET to -7
(depth8==0) to distinguish between an unoccupied entry and the
otherwise lowest possible depth, i.e., DEPTH_NONE (depth8==1).
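A hedged sketch of the encoding (the save/probe code is simplified):

```cpp
// With DEPTH_OFFSET = -7, depth8 == 0 is reserved for "unoccupied",
// DEPTH_NONE (= -6) stores as depth8 == 1, and real depths sit above it.
constexpr int DEPTH_OFFSET = -7;

uint8_t depth8   = uint8_t(d - DEPTH_OFFSET);    // in TTEntry::save()
Depth   d_back   = Depth(depth8 + DEPTH_OFFSET); // when probing
bool    occupied = depth8 != 0;                  // the new occupancy test
```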
To prevent a performance regression, we'll reorder the TT entry fields
by the access order of TranspositionTable::probe(). Memory in general
works fastest when accessed in sequential order. We'll also match the
store order in TTEntry::save() with the entry field order, and
re-order the 'if-or' expressions in TTEntry::save() from the cheapest
to the most expensive.
Finally, as we now have a proper TT entry occupancy test, we'll fix a
minor corner case with hashfull reporting. To reproduce:
- Use a big hash
- Either:
a. Start 31 very quick searches (this wraps generation around to 0); or
b. Force generation of the first search to 0.
- go depth infinite
Before the fix, hashfull would incorrectly report nearly full hash
immediately after the search start, since
TranspositionTable::hashfull() used to consider only the entry
generation and not whether the entry was actually occupied.
mstembera [Thu, 20 Aug 2020 23:59:27 +0000 (16:59 -0700)]
Support VNNI on 256bit vectors
Due to downclocking on current chips (tested up to Cascade Lake) that
support avx512 and vnni512, it is better to use avx2 or vnni256
for multithreaded (in particular hyperthreaded) engine use.
In single-threaded use, the picture is different.
gcc compilation for vnni256 requires a toolchain for gcc >= 9.
George Sobala [Mon, 24 Aug 2020 05:37:42 +0000 (06:37 +0100)]
armv8 AArch64 does not require -mfpu=neon
-mfpu is not required on the AArch64 / armv8 architecture on Linux and throws an error if present.
This PR has been tested on gcc and clang on Gentoo-64 and Raspian-64 on a Raspberry Pi 4,
as well as with a cross from Ubuntu
(`make clean && make -j build ARCH=armv8 COMP=gcc COMPILER=aarch64-linux-gnu-g++`)
Vizvezdenec [Sun, 23 Aug 2020 11:22:32 +0000 (14:22 +0300)]
Introduce movecount pruning for qsearch()
If in quiescence search, we assume that we can prune late moves when (see the sketch below):
a) the move is late enough in the move ordering: moveCount > abs(depth) + 2
b) we are not in check
c) the late move does not give check
d) the late move is not an advanced pawn push
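A hedged sketch of the combined condition (illustrative, not the committed code):

```cpp
// Late-move (movecount) pruning in qsearch(); letters refer to the list above.
if (   !pos.checkers()                    // b) not in check
    && moveCount > abs(depth) + 2         // a) late enough in the move ordering
    && !givesCheck                        // c) move does not give check
    && !pos.advanced_pawn_push(move))     // d) not an advanced pawn push
    continue;
```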
syzygy1 [Sat, 22 Aug 2020 11:36:34 +0000 (13:36 +0200)]
Skip the alignment bug workaround for Clang
Clang-10.0.0 poses as gcc-4.2:
```
$ clang++ -E -dM - </dev/null | grep GNUC
```
This means that Clang is using the workaround for the alignment bug of gcc-8
even though it does not have the bug (as far as I know).
This patch should speed up AVX2 and AVX512 compiles on Windows (when using Clang),
because it disables (for Clang) the gcc workaround we had introduced in this commit:
https://github.com/official-stockfish/Stockfish/commit/875183b310a8249922c2155e82cb4cecfae2097e
Stéphane Nicolet [Sat, 22 Aug 2020 09:37:53 +0000 (11:37 +0200)]
Instructions to build on older Macintosh
In recent Macs, it is possible to use the Clang compiler provided by Apple
to compile Stockfish out of the box, and this is the method used by default
in our Makefile (the Makefile sets the macosx-version-min=10.14 flag to select
the right libc++ library for the Clang compiler with recent c++17 support).
But it is quite possible to compile and run Stockfish on older Macs! Below
we describe a method to install a recent GNU compiler on these Macs, to get
the c++17 support. We have tested the following procedure to install gcc10 on
machines running Mac OS 10.7, Mac OS 10.9 and Mac OS 10.13:
1) install XCode for your machine.
2) install Apple command-line developer tools for XCode, by typing the following
command in a Terminal:
```
sudo xcode-select --install
```
3) go to the Stockfish "src" directory, then try a default build and run Stockfish:
```
make clean
make build
make net
./stockfish
```
4) if step 3 worked, congrats! You have a compiler recent enough on your Mac
to compile Stockfish. If not, continue with step 5 to install GNU gcc10 :-)
5) install the MacPorts package manager (https://www.macports.org/install.php),
for instance using the fast method in the "macOS Package (.pkg) Installer"
section of the page.
6) use the "port" command to install the gcc10 package of MacPorts by typing the
following command:
```
sudo port install gcc10
```
With this step, MacPorts will install the gcc10 compiler under the name "g++-mp-10"
in the /opt/local/bin directory:
```
which g++-mp-10
/opt/local/bin/g++-mp-10 <--- answer
```
7) You can now go back to the "src" directory of Stockfish, and try to build
Stockfish by pointing at the right compiler:
```
make clean
make build COMP=gcc COMPCXX=/opt/local/bin/g++-mp-10
make net
./stockfish
```
8) Enjoy Stockfish on Macintosh!
See this pull request for further discussion:
https://github.com/official-stockfish/Stockfish/pull/3049
gsobala [Fri, 21 Aug 2020 10:28:53 +0000 (11:28 +0100)]
Update Makefile for macOS
Changes to deal with compilation (particularly profile-build) on macOS.
(1) The default toolchain has gcc masquerading as clang, so the
previous Makefile was not picking up the required changes
for the different profiling tools.
(2) The previous Makefile test for gccisclang occurred before
a potential overwrite of CXX by COMPCXX
(3) llvm-profdata no longer runs as a command on macOS and
instead is invoked by ``xcrun llvm-profdata``
(4) Needs to support use of true gcc using e.g.
``make build ... COMPCXX=g++-10``
(5) enable profile-build in travis for macOS
Since the initial stages of the merge, progress has been made so that
this seems the best option now:
* NNUE is clearly stronger on most relevant hardware and time controls
* All of our CI and testing infrastructure has been adjusted
* The default net is easy to get (further ideas #3030)
Some GUIs do not show the error message when the engine terminates in the no-net case, as it is sent to cerr.
Instead, send it as an info string, which the GUI is more likely to display.
syzygy1 [Mon, 17 Aug 2020 23:56:12 +0000 (01:56 +0200)]
Expanded support for x86-32 architectures.
Add new ARCH targets:
x86-32-sse41-popcnt > x86 32-bit with sse41 and popcnt support
x86-32-sse2 > x86 32-bit with sse2 support
x86-32 > x86 32-bit generic (with mmx and sse support)
mstembera [Sun, 16 Aug 2020 22:23:50 +0000 (15:23 -0700)]
Fallback to NNUE
If the classical eval ends up much smaller than estimated, fall back to NNUE.
Also use multiply instead of divide for the threshold comparison for smoother transitions without rounding.
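A hedged sketch of the multiply-vs-divide point (names and the factor are illustrative):

```cpp
// Integer division rounds toward zero, so the divide form switches eval in
// coarse steps; the multiply form compares exactly and transitions smoothly.
bool useClassical_div = std::abs(psq) / 4 > threshold;   // before (rounds)
bool useClassical_mul = std::abs(psq) > 4 * threshold;   // after (exact)
```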
notruck [Sun, 16 Aug 2020 15:59:13 +0000 (08:59 -0700)]
Support building for Android using NDK
The easiest way to use the NDK in conjunction with this Makefile (tested on linux-x86_64):
1. Download the latest NDK (r21d) from Google from https://developer.android.com/ndk/downloads
2. Place and unzip the NDK in $HOME/ndk folder
3. Export the path variable e.g., `export PATH=$PATH:$HOME/ndk/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin`
4. cd to your Stockfish/src dir
5. Issue `make -j ARCH=armv8 COMP=ndk build` (use `ARCH=armv7` or `ARCH=armv7-neon` for older CPUs)
6. Optionally `make -j ARCH=armv8 COMP=ndk strip`
7. That's all. Enjoy!
Improves support for Raspberry Pi (incomplete?) and compiling on ARM in general
Support is still fragile as we're missing CI on these targets. Nevertheless tested with:
```bash
# build crosses from ubuntu 20.04 on x86 to various arch/OS combos
# tested with suitable packages installed
# (build-essentials, mingw-w64, g++-arm-linux-gnueabihf, NDK (r21d) from google)
# cross to Android
export PATH=$HOME/ndk/android-ndk-r21d/toolchains/llvm/prebuilt/linux-x86_64/bin:$PATH
make clean && make -j build ARCH=armv7 COMP=ndk && make -j build ARCH=armv7 COMP=ndk strip
make clean && make -j build ARCH=armv7-neon COMP=ndk && make -j build ARCH=armv7-neon COMP=ndk strip
make clean && make -j build ARCH=armv8 COMP=ndk && make -j build ARCH=armv8 COMP=ndk strip
# cross to Raspberry Pi
make clean && make -j build ARCH=armv7 COMP=gcc COMPILER=arm-linux-gnueabihf-g++
make clean && make -j build ARCH=armv7-neon COMP=gcc COMPILER=arm-linux-gnueabihf-g++
# cross to Windows
make clean && make -j build ARCH=x86-64-modern COMP=mingw
```