Disservin [Wed, 5 Jun 2024 16:31:11 +0000 (18:31 +0200)]
Update clang-format to version 18
clang-format-18 is available in Ubuntu Noble (24.04). If you are on an
older version, you can use the installation script from LLVM:
https://apt.llvm.org/
Windows users should be able to download clang-format from the official
LLVM release builds https://github.com/llvm/llvm-project/releases
or get the latest from MSYS2
https://packages.msys2.org/package/mingw-w64-x86_64-clang.
macOS users can resort to "brew install clang-format".
Viren6 [Wed, 5 Jun 2024 02:24:39 +0000 (03:24 +0100)]
Use futility margin in razoring margin
Uses futilityMargin * depth to set the razoring margin. This retains the
quadratic depth scaling, which preserves mate-finding capabilities. This
patch is nice because it increases the Elo sensitivity of the futility
margin heuristics.
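A minimal sketch of the relationship, assuming a linear futility margin (the slope here is hypothetical, not the tuned Stockfish constant):

```
using Value = int;
using Depth = int;

// Hypothetical futility margin: grows roughly linearly with depth.
Value futility_margin(Depth d) { return 100 * d; }

// Razoring margin built from it: since futility_margin(d) is linear in d,
// the product futility_margin(d) * d keeps the old margin's quadratic
// depth scaling, which preserves mate-finding capabilities.
Value razoring_margin(Depth d) { return futility_margin(d) * d; }
```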
Tomasz Sobczyk [Tue, 4 Jun 2024 10:48:13 +0000 (12:48 +0200)]
Add NumaPolicy "hardware" option that bypasses current processor affinity.
Can be used in case a GUI (e.g. ChessBase 17, see #5307) sets affinity to a
single processor group, but the user would like to use the full capabilities of
the hardware. Improves affinity handling on Windows in case of multiple
available APIs and existing affinities.
Recently when I overhauled these comments, Disservin asked why these
values were so much lower: they are a relic from when we had a third QS
stage at -5. Now that we don't, fix them to the obvious place.
I was fairly sure this was nonfunctional but ran the non-regression test
to be doubly sure.
Disservin [Fri, 31 May 2024 08:53:10 +0000 (10:53 +0200)]
Add helpers for managing aligned memory
Previously, we had two type aliases, LargePagePtr and AlignedPtr, which
required manually initializing the aligned memory for the pointer.
The new helpers:
- make_unique_aligned
- make_unique_large_page
are now available for allocating aligned memory (with large pages). They
behave similarly to std::make_unique, ensuring objects allocated with
these functions follow RAII.
The old approach had issues with initializing non-trivial types or
arrays of objects. The evaluation function of the network is now a
unique pointer to an array instead of an array of unique pointers.
Memory-related functions have been moved into memory.h.
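A minimal sketch of what such a helper can look like, assuming std::aligned_alloc and ignoring large pages and array support (the real memory.h covers both):

```
#include <cstdlib>
#include <memory>
#include <new>
#include <utility>

// make_unique-style helper for aligned storage. Unlike the old AlignedPtr
// alias, construction happens inside the helper, so the returned pointer
// always owns a fully initialized object (RAII).
template<typename T, typename... Args>
auto make_unique_aligned_sketch(Args&&... args) {
    auto deleter = [](T* p) {
        p->~T();       // run the destructor explicitly...
        std::free(p);  // ...then release the raw aligned block
    };
    void* raw = std::aligned_alloc(alignof(T), sizeof(T));
    if (!raw)
        throw std::bad_alloc();
    // Placement-new so non-trivial types are constructed correctly.
    T* obj = new (raw) T(std::forward<Args>(args)...);
    return std::unique_ptr<T, decltype(deleter)>(obj, deleter);
}
```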
Michael Chaly [Sat, 1 Jun 2024 17:44:06 +0000 (20:44 +0300)]
Adjust return bonus from tt cutoffs at fail highs
This is a reintroduction of the recently simplified logic: when a positive
TT cutoff occurs, return not the TT value itself but something between it
and beta. The difference is that instead of a static linear combination we
use essentially the same formula as in the main search, with the only
difference being the use of TT depth instead of depth, which makes a lot of
sense.
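A sketch of the shape of the returned value, with simplified names; master's fail-high smoothing in the main search has this same form with the current depth in place of ttDepth:

```
using Value = int;
using Depth = int;

// Interpolate between the TT value and beta, weighting the TT value more
// heavily the deeper the stored search was (assumes ttDepth >= 0).
Value tt_cutoff_value(Value ttValue, Value beta, Depth ttDepth) {
    return (ttValue * ttDepth + beta) / (ttDepth + 1);
}
```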
Created by training L1-128 from scratch with:
- skipping based on simple eval in the trainer, for compatibility with
regular binpacks without requiring pre-filtering all binpacks
- minimum simple eval of 950, lower than 1000 previously
- usage of some hse-v1 binpacks with minimum simple eval 1000
- addition of hse-v6 binpacks with minimum simple eval 500
- permuting the FT with 10k positions from fishpack32.binpack
- torch.compile to speed up smallnet training
Training is significantly slower when using non-pre-filtered binpacks due to
the increased skipping required.
rn5f107s2 [Thu, 30 May 2024 19:18:42 +0000 (21:18 +0200)]
MCP more after a bad singular search
The idea: if the singular search failed low and therefore produced an upper-bound score, we can use that score as an approximate upper bound on the bestValue our non-ttMoves will produce. If this value is well below alpha, we assume that all non-ttMoves will also score below alpha and can therefore skip more moves, as in the sketch below.
This patch also sets up variables for future patches wanting to use the singular search result outside of singular extensions, singularBound and singularValue, meaning further patches using this search result to affect various pruning techniques can be tried.
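A sketch of the pruning idea under those assumptions; the margin and the amount of extra skipping are hypothetical, only the shape of the condition follows the description:

```
enum Bound { BOUND_NONE, BOUND_UPPER, BOUND_LOWER, BOUND_EXACT };

int adjusted_move_count_limit(int moveCountLimit, Bound singularBound,
                              int singularValue, int alpha) {
    const int margin = 50;  // hypothetical
    // A fail-low singular search makes singularValue an approximate upper
    // bound on what non-ttMoves can score; if even that sits well below
    // alpha, allow skipping more moves.
    if (singularBound == BOUND_UPPER && singularValue < alpha - margin)
        moveCountLimit -= moveCountLimit / 4;  // hypothetical tightening
    return moveCountLimit;
}
```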
FauziAkram [Fri, 31 May 2024 01:01:02 +0000 (04:01 +0300)]
Tweak first picked move (ttMove) reduction rule
Instead of always resetting the reduction to 0, we now only do so if the
current reduction is less than 2; if it is 2 or more, we decrease it by 2
instead.
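A sketch of the new rule; for a non-negative reduction the two branches collapse into a single max:

```
#include <algorithm>

// Before: r = 0 unconditionally for the first picked move (ttMove).
// After: reset only when r < 2, otherwise decrease by 2 - which, for
// non-negative r, is equivalent to:
int tt_move_reduction(int r) { return std::max(r - 2, 0); }
```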
Tomasz Sobczyk [Thu, 30 May 2024 10:56:44 +0000 (12:56 +0200)]
Fix process' processor affinity determination on Windows.
Specialize and privatize NumaConfig::get_process_affinity.
Only enable NUMA capability for 64-bit Windows.
Following #5307 and some more testing it was determined that the way affinity
was being determined on Windows was incorrect, based on incorrect assumptions
about GetNumaProcessorNodeEx.
This patch fixes the issue by attempting to retrieve the actual process'
processor affinity using the Windows API. However, one issue persists that
is not addressable due to limitations of Windows and will have to be
considered a limitation. If affinities were set using SetThreadAffinityMask
instead of SetThreadSelectedCpuSetMasks and GetProcessGroupAffinity returns
more than one group, it is NOT POSSIBLE to determine the affinity
programmatically on Windows. In such a case the implementation assumes no
affinities are set and will consider all processors available for execution.
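A sketch of the group-count probe such a fallback can rest on (Windows only; error handling omitted, and the surrounding NumaConfig plumbing is left out):

```
#ifdef _WIN64
    #include <windows.h>

// If the process spans more than one processor group, per-thread masks set
// via SetThreadAffinityMask cannot be recovered, so the implementation
// assumes no affinities are set.
bool process_affinity_is_knowable() {
    USHORT groupCount = 0;
    // With a zero-sized buffer the call fails with ERROR_INSUFFICIENT_BUFFER
    // and reports the required number of groups in groupCount.
    GetProcessGroupAffinity(GetCurrentProcess(), &groupCount, nullptr);
    return groupCount <= 1;
}
#endif
```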
Michael Chaly [Thu, 30 May 2024 16:27:12 +0000 (19:27 +0300)]
Allow tt cutoffs for shallower depths in certain conditions
Current master allows TT cutoffs only when the depth from the TT is
strictly greater than the current node's depth. This patch also allows
them when the depths are equal, provided the TT value is less than or
equal to beta.
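A sketch of the relaxed condition with simplified names; the equal-depth case can be folded in as a boolean offset:

```
using Value = int;
using Depth = int;

// Master required ttDepth > depth. The patch also accepts ttDepth == depth
// whenever the TT value does not exceed beta.
bool tt_depth_allows_cutoff(Depth ttDepth, Depth depth, Value ttValue, Value beta) {
    return ttDepth > depth - (ttValue <= beta ? 1 : 0);
}
```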
This PR updates the internal WDL model, using data from 2.5M games played by SF-dev (3c62ad7).
Note that the normalizing constant has increased from 329 to 368.
Changes to the fitting procedure:
* the value for --materialMin was increased from 10 to 17: including data with less material leads to less accuracy for larger material count values
* the data was filtered to only include single thread LTC games at 60+0.6
* the data was filtered to only include games from master against patches that are (approximately) within 5 nElo of master
For more information and plots of the model see PR#5309
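For reference, the normalizing constant is what maps internal scores to centipawns; a sketch of the usual conversion:

```
// An internal value equal to the normalizing constant corresponds to 100 cp.
constexpr int NormalizeToPawnValue = 368;  // updated from 329 by this patch

int to_centipawns(int v) { return 100 * v / NormalizeToPawnValue; }
```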
Created by further tuning the spsa-tuned main net `nn-c721dfca8cd3.nnue`
with the same methods described in https://github.com/official-stockfish/Stockfish/pull/5254
This net was reached at 61k / 120k SPSA games at 70+0.7, 7 threads:
https://tests.stockfishchess.org/tests/view/665639d0a86388d5e27dd259
xoto10 [Tue, 28 May 2024 18:40:40 +0000 (19:40 +0100)]
Add compensation factor to adjust extra time according to time control
As Stockfish nets and search evolve, the existing time control appears
to give too little time at STC, roughly correct at LTC, and too little
at VLTC+.
This change adds an adjustment to the optExtra calculation. This
adjustment is easy to retune and refine, so it should be easier to keep
up-to-date than the more complex calculations used for optConstant and
optScale.
This simplification patch merges the pawn count terms in the eval
formula with the material term, updating the offset constant for
the NNUE part of the formula from 34000 to 34300, because the average
pawn count in middlegame positions evaluated during search is around 8.
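A toy before/after sketch of why the offset moves by 300; everything here except 34000 and 34300 is hypothetical, it only illustrates folding an average pawn contribution into the constant:

```
using Value = int;

// Before (hypothetical shape): a separate pawn-count term next to material.
Value nnue_weight_before(Value material, int pawnCount) {
    return 34000 + material + 37 * pawnCount;  // 37 is a made-up per-pawn weight
}

// After: the pawn term is folded into the material term, and the offset
// absorbs its average contribution: ~8 pawns * ~37 = ~300, so 34000 -> 34300.
Value nnue_weight_after(Value materialWithPawns) {
    return 34300 + materialWithPawns;
}
```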
Tomasz Sobczyk [Fri, 17 May 2024 10:10:31 +0000 (12:10 +0200)]
Improve performance on NUMA systems
Allow for NUMA memory replication for NNUE weights. Bind threads to ensure execution on a specific NUMA node.
This patch introduces NUMA memory replication, currently only utilized for the NNUE weights. Along with it comes all the machinery required to identify NUMA nodes and bind threads to specific processors/nodes. It also brings small changes to Thread and ThreadPool to allow easier execution of custom functions on a designated thread. The old thread binding (WinProcGroup) machinery is removed because it is incompatible with this patch. Small changes to unrelated parts of the code were made to ensure correctness, like some classes being made unmovable, raw pointers replaced with unique_ptr, etc.
Windows 7 and Windows 10 are partially supported. Windows 11 is fully supported. Linux is fully supported, with explicit exclusion of Android. No additional dependencies.
-----------------
A new UCI option `NumaPolicy` is introduced. It can take the following values:
```
system   - gathers NUMA node information from the system (lscpu or the
           Windows API) and binds each thread to a single NUMA node
none     - assumes there is 1 NUMA node and never binds threads
auto     - the default; depending on the number of threads and NUMA nodes,
           enables binding only on multi-node systems once the thread count
           reaches a threshold (dependent on node size and count)
[custom] - ':'-separated NUMA nodes, ','-separated cpu indices;
           supports "first-last" range syntax for cpu indices,
           for example '0-15,32-47:16-31,48-63'
```
Setting `NumaPolicy` forces recreation of the threads in the ThreadPool, which in turn forces the recreation of the TT.
The threads are distributed among NUMA nodes in a round-robin fashion based on fill percentage (i.e. it strives to fill all NUMA nodes evenly). Threads are bound to NUMA nodes, not specific processors, because that is our only requirement and the OS can schedule them better.
Special care is taken that maximum memory usage on systems that do not require memory replication stays as before; unnecessary copies are avoided.
On Linux the process' processor affinity is respected. This means that if, for example, you use taskset to restrict Stockfish to a single NUMA node, then the `system` and `auto` settings will only see a single NUMA node (more precisely, the processors included in the current affinity mask) and act accordingly.
-----------------
We can't ensure that a memory allocation takes place on a given NUMA node without using libnuma on Linux, or appropriate custom allocators on Windows (https://learn.microsoft.com/en-us/windows/win32/memory/allocating-memory-from-a-numa-node), so to avoid complications the current implementation relies on the first-touch policy. Due to this we also rely on the memory allocator to give us a new chunk of untouched memory from the system. This appears to work reliably on Linux, but results may vary.
macOS is not supported, because as far as we know it is not affected, and an implementation would be problematic anyway.
Windows is supported since Windows 7 (https://learn.microsoft.com/en-us/windows/win32/api/processtopologyapi/nf-processtopologyapi-setthreadgroupaffinity). Until Windows 11/Server 2022, NUMA nodes are split such that they cannot span processor groups, because before then it was not possible to set thread affinity spanning processor groups. The splitting is done manually in some cases (required after Windows 10 Build 20348). Since Windows 11/Server 2022 we can set affinities spanning processor groups, so this splitting is not done and the behaviour closely matches Linux.
Linux is supported **without** a libnuma requirement. `lscpu` is expected.
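A sketch of how the custom string can be parsed (simplified, no validation; not the actual NumaConfig code):

```
#include <cstddef>
#include <set>
#include <sstream>
#include <string>
#include <vector>

// ':' separates NUMA nodes, ',' separates cpu index ranges, "first-last" is
// an inclusive range. "0-15,32-47:16-31,48-63" yields two nodes of 32 cpus.
std::vector<std::set<int>> parse_numa_policy(const std::string& s) {
    std::vector<std::set<int>> nodes;
    std::istringstream nodeStream(s);
    std::string nodeDesc;
    while (std::getline(nodeStream, nodeDesc, ':')) {
        std::set<int> cpus;
        std::istringstream rangeStream(nodeDesc);
        std::string range;
        while (std::getline(rangeStream, range, ',')) {
            std::size_t dash = range.find('-');
            int first = std::stoi(range.substr(0, dash));
            int last  = dash == std::string::npos
                          ? first
                          : std::stoi(range.substr(dash + 1));
            for (int c = first; c <= last; ++c)
                cpus.insert(c);
        }
        nodes.push_back(std::move(cpus));
    }
    return nodes;
}
```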
Linmiao Xu [Fri, 24 May 2024 14:58:13 +0000 (10:58 -0400)]
Lower smallnet threshold with tuned eval params
The smallnet threshold is now below the training data range
of the current smallnet (simple eval diff > 1k, nn-baff1edelf90.nnue)
when no pawns are on the board.
Params found with SPSA at 93k / 120k games at 60+0.6:
https://tests.stockfishchess.org/tests/view/664fa166a86388d5e27d7d6b
Tuned on top of: https://github.com/official-stockfish/Stockfish/pull/5287
Muzhen Gaming [Thu, 23 May 2024 00:28:46 +0000 (08:28 +0800)]
VVLTC search tune
Parameters were tuned in 2 stages:
1. 127k games at VVLTC:
https://tests.stockfishchess.org/tests/view/6649f8dfb8fa20e74c39f52a.
2. 106k games at VVLTC:
https://tests.stockfishchess.org/tests/view/664bfb77830eb9f886615a9d.
Muzhen Gaming [Wed, 22 May 2024 01:09:04 +0000 (09:09 +0800)]
Revert "Reduce When TTValue is Above Alpha"
The patch regressed significantly at longer time controls. In
particular, the `depth--` behavior was predicted to scale badly based on
data from other variations of the patch.
cj5716 [Sun, 19 May 2024 05:15:42 +0000 (13:15 +0800)]
Optimise pairwise multiplication
This speedup was first inspired by a comment by @AndyGrant on my recent
PR "If mullo_epi16 would preserve the signedness, then this could be
used to remove 50% of the max operations during the halfkp-pairwise
mat-mul relu deal."
That got me thinking, because although mullo_epi16 did not preserve the
signedness, mulhi_epi16 did, and so we could shift left and then use
mulhi_epi16, instead of shifting right after the mullo.
However, due to some issues with shifting into the sign bit, the FT
weights and biases had to be multiplied by 2 for the optimisation to
work.
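A rough sketch of the trick on AVX2; the shift amount and surrounding kernel are illustrative, not the exact Stockfish code:

```
#include <immintrin.h>

// mulhi_epi16 returns the *signed* high 16 bits of the 32-bit product, so
// pre-shifting one operand left lands the wanted bits in the high half while
// keeping the sign, letting a max-based clamp be dropped.
__m256i pairwise_product(__m256i a, __m256i b) {
    __m256i aShifted = _mm256_slli_epi16(a, 7);  // move target bits upward
    return _mm256_mulhi_epi16(aShifted, b);      // ((a << 7) * b) >> 16 == (a * b) >> 9
}
```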
Speedup on "Arch=x86-64-bmi2 COMP=clang", courtesy of @Torom
Result of 50 runs
base (...es/stockfish) = 962946 +/- 1202
test (...ise-max-less) = 979696 +/- 1084
diff = +16750 +/- 1794
speedup = +0.0174
P(speedup > 0) = 1.0000
CPU: 4 x Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
Hyperthreading: on
Also a speedup on "COMP=gcc", courtesy of Torom once again
Result of 50 runs
base (...tockfish_gcc) = 966033 +/- 1574
test (...max-less_gcc) = 983319 +/- 1513
diff = +17286 +/- 2515
speedup = +0.0179
P(speedup > 0) = 1.0000
CPU: 4 x Intel(R) Core(TM) i7-6700K CPU @ 4.00GHz
Hyperthreading: on
Viren6 [Sun, 19 May 2024 01:58:01 +0000 (02:58 +0100)]
Addition of new scaling comments
This patch is intended to prevent patches like 9b90cd8 and the
subsequent reversion e3c9ed7 from happening again. The scaling behaviour of
the reduction adjustments in the non-linear scaling section has been proven
to >8 sigma:
The else if condition is moved to the non-scaling section based on:
https://tests.stockfishchess.org/tests/view/664567a193ce6da3e93b3232 (It
has no proven scaling)
General comment improvements and removal of a redundant margin condition
have also been included.
Also "fix" movepicker to allow depths between CHECKS and NO_CHECKS,
which makes them easier to tweak (not that they get tweaked hardly ever)
(This was more beneficial when there was a third stage to DEPTH_QS, but
it's still an improvement now)
Second spsa with 30k / 120k games at 60+0.6:
https://tests.stockfishchess.org/tests/view/664be227830eb9f886615a36
Values found at 10k games at 60+0.6 also passed STC and LTC:
https://tests.stockfishchess.org/tests/view/664bf4bd830eb9f886615a72
https://tests.stockfishchess.org/tests/view/664c0905830eb9f886615abf
FauziAkram [Mon, 20 May 2024 23:19:54 +0000 (02:19 +0300)]
Refine Evaluation Scaling with Piece-Specific Weights
Refine evaluation scaling with piece-specific weights instead of the
simplified npm (non-pawn material) method.
I took the initial idea from Viren6, who worked on it in September of last
year. I reworked and tuned it, and it has now passed both tests.
Michael Chaly [Mon, 20 May 2024 21:40:55 +0000 (00:40 +0300)]
Rescale pawn history updates
This patch is somewhat of a continuation of recent pawn history gainers.
It makes pawn history updates after search twice smaller. Since on average these updates make pawn history more negative, the offset is changed to a lower value so that the average value remains approximately the same.
Michael Chaly [Mon, 20 May 2024 00:22:40 +0000 (03:22 +0300)]
Update correction history in case of successful null move pruning
Since null move pruning searches the same position, it makes some sense to try to update the correction history there in case of a fail high.
The update value is 4 times smaller than the normal update.
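A sketch of the bookkeeping, with hypothetical names; only the quarter-strength factor comes from the patch:

```
// Null move pruning searches the very same position, so on a null-move fail
// high the position's correction history entry also gets a reduced update.
void update_correction_on_null_fail_high(int& corrEntry, int normalUpdate) {
    corrEntry += normalUpdate / 4;  // a quarter of the normal update
}
```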
Linmiao Xu [Sun, 19 May 2024 18:01:49 +0000 (14:01 -0400)]
Re-eval only if smallnet output flips from simple eval
Recent attempts to change the smallnet NNUE re-eval
threshold did not show much Elo difference:
https://tests.stockfishchess.org/tests/view/664a29bb25a9058c4d21d53c
https://tests.stockfishchess.org/tests/view/664a299925a9058c4d21d53a
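A sketch of the sign-flip rule with simplified names (the actual evaluate() wiring is omitted):

```
using Value = int;

// Re-run the big net only when the smallnet's output disagrees in sign with
// the simple material eval, instead of using a fixed magnitude threshold.
bool should_reeval_with_big_net(Value smallnetEval, Value simpleEval) {
    return (smallnetEval > 0) != (simpleEval > 0);
}
```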
Michael Chaly [Sun, 19 May 2024 15:48:43 +0000 (18:48 +0300)]
Do more aggressive pawn history updates
Tweak of recent patch that made pawn history to update for move that caused a fail low - and setting up default value of it to -900. This patch makes it more aggressive - twice bigger updates and default value -1100.
Tweak continuation history bonus dependent on ply.
This patch is based on the following tuning https://tests.stockfishchess.org/tests/view/6648b2eb308cceea45533abe, using only the tuned factors for the continuation history.
The unusual result of (combined) +12.0 +- 3.7 in the 2 VVLTC simplification SPRTs that were run was caused by the base having only 64MB of hash instead of 512MB (asymmetric hash).
Vizvezdenec was the one to notice this.
FauziAkram [Fri, 17 May 2024 22:22:41 +0000 (01:22 +0300)]
Early Exit in Bitboards::sliding_attack()
The original code checks for occupancy within the loop condition. By moving this check inside the loop and adding an early exit condition, we can avoid unnecessary iterations once a blocking piece is encountered.
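A simplified standalone sketch of the early-exit form on a plain 0..63 board (not the exact Stockfish code, which works in terms of Direction and safe_destination()):

```
#include <cstdint>

using Bitboard = uint64_t;

// Walk each rook ray and stop as soon as a blocker is hit, instead of
// re-testing occupancy in the loop condition.
Bitboard sliding_attack_rook(int sq, Bitboard occupied) {
    Bitboard attacks = 0;
    const int dirs[4] = {8, -8, 1, -1};  // N, S, E, W on a 0..63 board
    for (int d : dirs) {
        int s = sq;
        while (true) {
            int next = s + d;
            // Stay on the board and prevent east/west wrap-around.
            if (next < 0 || next > 63 || (d == 1 && next % 8 == 0)
                || (d == -1 && s % 8 == 0))
                break;
            s = next;
            attacks |= 1ULL << s;
            if (occupied & (1ULL << s))
                break;  // early exit: a blocker ends the ray
        }
    }
    return attacks;
}
```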
Created by first retraining the spsa-tuned main net `nn-ae6a388e4a1a.nnue` with:
- using v6-dd data without bestmove captures removed
- addition of T80 mar2024 data
- increasing loss by 20% when Q is too high
- torch.compile changes for marginal training speed gains
And then SPSA tuning weights of epoch 899 following methods described in:
https://github.com/official-stockfish/Stockfish/pull/5149
This net was reached at 92k out of 120k steps in this 70+0.7, 7-thread SPSA tuning run:
https://tests.stockfishchess.org/tests/view/66413b7df9f4e8fc783c9bbb
Thanks to @Viren6 for suggesting usage of:
- c value 4 for the weights
- c value 128 for the biases
Scripts for automating the application of fishtest SPSA params through to exporting the tuned .nnue are in:
https://github.com/linrock/nnue-tools/tree/master/spsa
Reduce more when improving and ttvalue is lower than alpha
More reduction if the position is improving, the value from the TT does
not exceed alpha, and the ttMove is excluded.
This idea is based on the following LMR condition tuning
https://tests.stockfishchess.org/tests/view/66423a1bf9f4e8fc783cba37
by using only three of the four largest terms: P[3], P[18] and P[12].
Michael Chaly [Tue, 14 May 2024 17:10:01 +0000 (20:10 +0300)]
Add extra bonus to pawn history for a move that caused a fail low
Basically the same idea as for continuation/main history, but with some
tweaks:
1) it uses a * 2 multiplier for the bonus instead of a full/half bonus - for
whatever reason this seems to work better;
2) attempts with this type of big bonus scaled somewhat poorly (or were
unlucky at longer time controls), but after measuring that the average value
of pawn history in LMR increased substantially after adding these bonuses
(for multiplier 1.5 it increased by roughly 400 out of the 8192 cap),
attempts were made to make the default pawn history negative to compensate -
and the version with multiplier 2 and initial fill value -900 passed.
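A sketch of the described update; table indexing is omitted and statBonus stands in for the usual depth-based bonus:

```
// The table starts negative so the doubled bonuses do not drift its average
// value upward.
constexpr int PawnHistoryFill = -900;  // initial fill value from the patch

void pawn_history_on_fail_low(int& entry, int statBonus) {
    entry += statBonus * 2;  // * 2 multiplier instead of a full/half bonus
}
```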
xoto10 [Mon, 13 May 2024 06:19:18 +0000 (07:19 +0100)]
Use 5% less time on first move
Stockfish appears to take too much time on the first move of a game and
then not enough on moves 2, 3, 4, ... This is probably caused by most of
the factors that increase time usually applying on the first move.
Attempts to give more time to the subsequent moves have not worked so
far, but this change to simply reduce first-move time by 5% worked.
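A sketch of the change; the flag and the surrounding time-management code are simplified away:

```
// Trim the optimum search time by 5% on the game's first move.
double first_move_optimum(double optimumTime, bool isFirstMove) {
    return isFirstMove ? optimumTime * 0.95 : optimumTime;
}
```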
Linmiao Xu [Thu, 9 May 2024 18:03:35 +0000 (14:03 -0400)]
Re-evaluate some small net positions for more accurate evals
Use main net evals when small net evals hint that higher eval
accuracy may be worth the slower eval speeds. With Finny caches,
re-evals with the main net are less expensive than before.
Original idea by mstembera, whom I've added as co-author to this PR.