Explicitly use alpha+1 for beta in NonPV search (#939)
Fixes the only exception, in razoring.
The code already asserts this invariant with assert(PvNode || (alpha == beta - 1)), and studying the program flow confirms that it holds, including for the modified line.
This patch tweaks some pawn values to favor flank attacks.
The first part of the patch increases the midgame psqt values of external pawns to launch more attacks (credits to user GuardianRM for this idea), while the second part increases the endgame connection values for pawns on upper ranks.
Andrey Neporada [Sat, 3 Dec 2016 08:37:07 +0000 (12:37 +0400)]
Help GCC to optimize msb() to a single instruction
GCC compiles __builtin_clzll to "63 ^ BSR", where BSR is the processor's "Bit Scan Reverse" instruction.
So the old msb() function is effectively 63 - (63 ^ BSR).
Unfortunately, GCC fails to simplify this expression down to a single BSR.
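A minimal sketch of the change (assuming GCC on x86-64; msb_old mirrors the previous builtin-based code, and the inline-asm variant is illustrative, not the committed diff):

    #include <cstdint>

    using Bitboard = uint64_t;

    // Old form: compiles to "63 - (63 ^ BSR)", which GCC fails to fold.
    inline int msb_old(Bitboard b) {
      return 63 - __builtin_clzll(b);
    }

    // Sketch of the fix: emit BSR directly via inline asm, so the result
    // needs no further arithmetic and msb() becomes a single instruction.
    inline int msb(Bitboard b) {
      Bitboard idx;
      __asm__("bsrq %1, %0" : "=r"(idx) : "rm"(b));
      return int(idx);
    }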
Marco Costalba [Fri, 25 Nov 2016 15:51:24 +0000 (16:51 +0100)]
Fix compile under Windows XP
The Windows APIs needed for processor groups may be missing from older Windows
versions, so instead of calling them directly (which would force the linker to
resolve the calls at link time), try to load them at runtime. To do this we
first need to define the corresponding function pointers.
Also, don't interfere with running fishtest on NUMA hardware under Windows:
avoid having all single-threaded Stockfish processes run on the same NUMA node.
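A minimal sketch of the runtime-loading pattern (the wrapper name and error handling are illustrative, not the actual patch; building it requires Windows 7 SDK headers or later):

    #include <windows.h>

    // Function-pointer type matching the processor-group API we may or
    // may not find at runtime.
    typedef BOOL (WINAPI *SetThreadGroupAffinity_t)(
        HANDLE, const GROUP_AFFINITY*, PGROUP_AFFINITY);

    void set_group_affinity(HANDLE thread, const GROUP_AFFINITY& affinity) {
      HMODULE k32 = GetModuleHandle(TEXT("kernel32.dll"));
      auto fun = (SetThreadGroupAffinity_t)
                     GetProcAddress(k32, "SetThreadGroupAffinity");
      if (!fun)
          return; // Older Windows (e.g. XP): API missing, just skip
      fun(thread, &affinity, nullptr);
    }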
Aram Tumanian [Mon, 21 Nov 2016 15:17:47 +0000 (17:17 +0200)]
Fix the pawn hash failure when the pawn key is 0
This patch fixes bugs #859 and #882.
At initialization we generate a new random key (Zobrist::noPawns).
It's added to the pawn key of all positions, so that the pawn key
of a pawnless position is no longer 0.
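A minimal sketch of the idea (assuming the usual XOR combination of Zobrist keys; the surrounding code and seed are illustrative):

    #include <cstdint>
    #include <random>

    using Key = uint64_t;

    namespace Zobrist { Key noPawns; }

    void init_zobrist() {
      std::mt19937_64 rng(20161121); // any fixed seed
      Zobrist::noPawns = rng();
    }

    // Folding the extra key into every pawn key guarantees that a
    // pawnless position no longer hashes to 0.
    Key pawn_key(Key pawnPieceKeys) {
      return pawnPieceKeys ^ Zobrist::noPawns;
    }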
Marco Costalba [Tue, 22 Nov 2016 06:41:46 +0000 (07:41 +0100)]
Handle Windows Processors Groups
Under Windows it is not possible for a process to run on more than one
logical processor group, which usually limits it to at most 64 cores.
To overcome this, special platform-specific APIs must be called to set
the group affinity of each thread. Original code from Texel by
Peter Österlund.
Tested by Jean-Paul Vael on a Xeon E7-8890 v4 with 88 threads, who confirmed
a speed-up of about 30% between 44 and 88 threads, as expected.
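A sketch of one way to spread threads across groups (the helper name and the round-robin policy are illustrative, not necessarily the committed code):

    #include <windows.h>

    // Bind the idx-th search thread to a processor group, so the process
    // can use more than the 64 logical processors of a single group.
    void bind_thread_sketch(int idx) {
      WORD groupCount = GetActiveProcessorGroupCount();
      GROUP_AFFINITY affinity = {};
      affinity.Group = WORD(idx % groupCount);
      DWORD cpus = GetActiveProcessorCount(affinity.Group);
      affinity.Mask = cpus >= 64 ? ~KAFFINITY(0)
                                 : (KAFFINITY(1) << cpus) - 1;
      SetThreadGroupAffinity(GetCurrentThread(), &affinity, nullptr);
    }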
Make the actual number of nodes searched closely match the number of
nodes requested, by checking the number of nodes searched more
frequently when the node limit is low.
All other searches retain the default checking frequency of
once per 4096 nodes and are thus unaffected.
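A sketch of the kind of interval computation this implies (the function name and the divisor are assumptions, not the exact patch):

    #include <algorithm>
    #include <cstdint>

    // With a node limit, shrink the interval between node-count checks
    // so the search stops close to the requested number of nodes;
    // without a limit, keep the cheap default of once per 4096 nodes.
    int node_check_interval(int64_t nodesLimit) {
      return nodesLimit ? int(std::min<int64_t>(4096, nodesLimit / 1024))
                        : 4096;
    }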
Aram Tumanian [Fri, 11 Nov 2016 13:02:28 +0000 (15:02 +0200)]
Make a version of Position::do_move() without the givesCheck parameter
In 10 of the 12 calls to Position::do_move(), the givesCheck argument is
simply gives_check(m). So it's reasonable to add an overload without
this parameter, which wraps the existing version.
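A minimal sketch of the overload (Move, StateInfo and the member functions here are stand-ins for Stockfish's real interfaces):

    struct StateInfo {};
    using Move = int;

    struct Position {
      bool gives_check(Move) const { return false; }         // stub
      void do_move(Move, StateInfo&, bool /*givesCheck*/) {} // existing version (stub)

      // New overload: the common case just computes gives_check(m) itself.
      void do_move(Move m, StateInfo& st) {
        do_move(m, st, gives_check(m));
      }
    };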
joergoster [Wed, 9 Nov 2016 15:42:27 +0000 (16:42 +0100)]
FEN parsing: add a second check for correctly setting e.p. square
Currently, we only check if there is a pawn in place
to make the en-passant capture. Now also check that
there is a pawn that could just have advanced two
squares. Also update the corresponding comment.
This makes the parsing of FENs a bit more robust; positions like the
one reported by Dann Corbit are now handled correctly.
position fen rnbqkb1r/ppp3pp/3p1n2/3P4/8/2P5/PP3PPP/RNBQKB1R w KQkq e6
d
+---+---+---+---+---+---+---+---+
| r | n | b | q | k | b | | r |
+---+---+---+---+---+---+---+---+
| p | p | p | | | | p | p |
+---+---+---+---+---+---+---+---+
| | | | p | | n | | |
+---+---+---+---+---+---+---+---+
| | | | P | | | | |
+---+---+---+---+---+---+---+---+
| | | | | | | | |
+---+---+---+---+---+---+---+---+
| | | P | | | | | |
+---+---+---+---+---+---+---+---+
| P | P | | | | P | P | P |
+---+---+---+---+---+---+---+---+
| R | N | B | Q | K | B | | R |
+---+---+---+---+---+---+---+---+
Fen: rnbqkb1r/ppp3pp/3p1n2/3P4/8/2P5/PP3PPP/RNBQKB1R w KQkq - 0 1
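A standalone sketch of the second check for the white-to-move case (the bit layout and names are assumptions): the double push that creates e.p. square e6 must have left a black pawn on e5, the square directly behind the target. In the transcript above no such pawn exists, so the bogus e.p. field is dropped and the printed FEN shows "-".

    #include <cstdint>

    using Bitboard = uint64_t; // bit 0 = a1 ... bit 63 = h8

    // For an e.p. target on rank 6, test the square one rank below it
    // for the black pawn that could just have advanced two squares.
    bool double_push_pawn_present(int epSquare, Bitboard blackPawns) {
      int pushedTo = epSquare - 8;              // e.g. e6 -> e5
      return blackPawns & (Bitboard(1) << pushedTo);
    }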
Fix undefined behaviour with unaligned loads in syzygy code
Casting a pointer to a type with stricter alignment requirements leads
to implementation-dependent behaviour. In practice everything is fine
on common platforms because the CPU/OS/compiler generate correct code,
but it is better to be safe than sorry.
Testing with dbg_hit_on() shows that unaligned accesses are very rare
(below 0.1%), so it makes sense to split the code into a fast path for
the common case and a slower fallback path.
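A sketch of the fast-path/fallback split (the helper is illustrative; memcpy is the standard well-defined way to perform an unaligned read):

    #include <cstdint>
    #include <cstring>

    template <typename T>
    T read_maybe_unaligned(const unsigned char* p) {
      if (reinterpret_cast<std::uintptr_t>(p) % alignof(T) == 0)
          return *reinterpret_cast<const T*>(p); // fast common path
      T v;                                       // rare (<0.1%) fallback
      std::memcpy(&v, p, sizeof(T));
      return v;
    }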
Small fixes for compilation with sanitize=yes optimize=no: always add
-fsanitize=undefined to the LDFLAGS, as required.
Also update config-sanity to check and report the status of the flag.
Marco Costalba [Sat, 16 Jul 2016 06:10:45 +0000 (08:10 +0200)]
Rewrite syzygy in C++
Rewrite the code in SF style, simplify it and document it.
The code is now much clearer and bug-free (no memory leaks or
other small issues) and is also smaller (more than
600 lines of code removed).
All the code has been rewritten except root_probe() and
root_probe_wdl(), which are completely misplaced and should
be retired altogether. For now they are left in their
original form.
The code is fully and deeply tested for equivalence with the
original, both in functionality and in speed, with hundreds of
games and test positions, and is guaranteed to be 100% equivalent.
Tested with the tb_dbg branch for functional equivalence on
more than 12M positions.
stockfish.exe bench 128 1 16 syzygy.epd
Position: 2016/2016
Total 12121156 Hits 0 hit rate (%) 0
Total time (ms) : 4417851
Nodes searched : 1100151204
Nodes/second : 249024
Tested with a 5,000 games match against master, 1 thread,
128 MB hash each, tc 40+0.4, which is almost equivalent
to LTC in Fishtest on this machine. 3-, 4- and 5-men syzygy
bases on SSD, 12-move opening book to emphasize mid- and endgame.
Score of SF-SyzygyC++ vs SF-Master: 633 - 617 - 3750 [0.502] 5000
ELO difference: 1
Avoid shifting negative signed integers, and use a typed enum
to avoid decrementing a variable beyond its defined range in
loops like:
for (Rank r = RANK_8; r >= RANK_1; --r)
Changes were tested individually and passed SPRT[-3, 1].
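A sketch of both fixes (illustrative, not the exact patch): give the enum a fixed underlying type so the terminating decrement below RANK_1 stays well defined, and perform left shifts on an unsigned value.

    #include <cstdint>

    // With a fixed underlying type every int8_t value is representable,
    // so the final --r that exits the loop is no longer undefined.
    enum Rank : int8_t { RANK_1, RANK_2, RANK_3, RANK_4,
                         RANK_5, RANK_6, RANK_7, RANK_8 };

    inline Rank& operator--(Rank& r) { return r = Rank(int(r) - 1); }

    // Shifting a negative signed integer left is undefined behaviour;
    // shift the unsigned representation instead.
    inline int shift_left(int v, unsigned s) {
      return int(unsigned(v) << s);
    }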
syzygy [Wed, 26 Oct 2016 21:01:11 +0000 (23:01 +0200)]
Output PV if last iteration does not complete
Instead of outputting "info nodes ... time ..." when the last
iteration is interrupted, simply call UCI::pv() to output the PV.
I thought about calling UCI::pv() with bounds -VALUE_INFINITE, VALUE_INFINITE
to avoid "lowerbound" or "upperbound" appearing in the output, but I'm not
sure that would be any better.
This patch fixes rare inconsistencies between the first move of
the last PV output and the bestmove played. It also makes sure
that all the latest statistics are sent to the GUI (not only nodes
and time but also nps, tbhits, hashfull).
ajithcj [Sat, 15 Oct 2016 07:29:24 +0000 (07:29 +0000)]
Remove useless assignments to currentMove
We reference (ss-1)->currentMove, i.e. we peek at the
current move of the parent node, so currentMove needs
to be valid in the main moves loop, while we search()
the subtree; outside of that loop the assignments are
useless.
Jacques [Sat, 8 Oct 2016 10:15:43 +0000 (12:15 +0200)]
Fixes for ARM compilation: take 2
The target:
Odroid U3 (http://www.hardkernel.com/main/products/prdt_info.php?g_code=g138745696275)
Debian Jessie
As listed in #550 and #638, three modifications are needed for compilation to work:
float-abi flag for GCC: if an FPU is present and supported by the installed OS,
the passed value needs to be hard. I didn't find any better solution than using
readelf to check for the availability of Tag_ABI_VFP_args, which should indicate
support for the FPU. The check is only done if the arch is arm; if readelf is not
present on the system, there will be an error (/bin/sh: 1: readelf: not found),
but the build will not break and will continue with the default softfp value.
Outputting the error is not really acceptable, but I wanted some feedback on the
check itself.
-lpthread is needed on armv7 outside of Android. I replaced UNAME with KERNEL
and OS to allow differentiating Android.
m32 flag: my understanding is that outside of Android this flag generates errors
on armv7.
These modifications should introduce changes only for non-Android armv7 builds.
atumanian [Thu, 6 Oct 2016 17:55:10 +0000 (20:55 +0300)]
Optimisation of Position::see and Position::see_sign
Stephane's patch removes the only usage of Position::see where the
returned value isn't immediately compared with a value, so I replaced
this function with its optimised and more specific version see_ge. This
function also supersedes Position::see_sign.
bool Position::see_ge(Move m, Value v) const;
This function tests whether the SEE of a move is greater than or equal
to a given value. We use forward iteration on captures instead of
backward iteration, so we don't need the swapList array. Also, we stop
as soon as we have enough information to determine the result, avoiding
unnecessary calls to the min_attacker function.
Speed tests (Windows 7), 20 runs for each engine:
Test engine: mean 866648, st. dev. 5964
Base engine: mean 846751, st. dev. 22846
Speedup: 1.023
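A toy illustration of the forward, early-exit idea described above (this is not the real see_ge, which works on bitboards; here victims[0] is the value of the piece initially captured and victims[i] the value of the piece that made capture i-1):

    #include <cstddef>
    #include <vector>

    bool see_ge_sketch(const std::vector<int>& victims, int v) {
      if (victims.empty())
          return 0 >= v;
      int balance = victims[0];   // opponent may simply not recapture
      for (std::size_t i = 1; i < victims.size(); ++i) {
        bool oppToMove = (i % 2 == 1);
        // Each side stands pat as soon as the running balance already
        // settles the comparison in its favour -- the early exit that
        // makes the backward swapList pass unnecessary.
        if (oppToMove && balance < v)   return false;
        if (!oppToMove && balance >= v) return true;
        balance += oppToMove ? -victims[i] : victims[i];
      }
      return balance >= v;
    }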
VoyagerOne [Mon, 3 Oct 2016 13:36:40 +0000 (09:36 -0400)]
Allow inCheck pruning
This is a bit tricky because we don't want
to prune the only legal evasions, even those
with negative SEE. So add an assert to keep
this subtle bug from slipping in later.
Correct pinners calculation and fix bug with pinned
pieces giving check. With this patch 'pinners' only
returns sliders with exactly one defensive piece between
the slider and the attacked square (in other words, pinners
returns exact pinners).
This was a co-operation between Marco Costalba,
Stéphane Nicolet and me.
Special thanks to Ronald de Man for reporting the bug with
pinned pieces giving check, discussed here:
https://groups.google.com/forum/?fromgroups=#!topic/fishcooking/S_4E_Xs5HaE
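A tiny standalone sketch of the "exactly one piece between" test (the bitboards for the ray and the occupancy are assumed to come from the usual helpers; the one-bit test uses the clear-lowest-bit trick):

    #include <cstdint>

    using Bitboard = uint64_t;

    // A candidate slider is an exact pinner only if the squares between
    // it and the attacked square contain exactly one piece.
    bool is_exact_pinner(Bitboard betweenSquares, Bitboard occupied) {
      Bitboard blockers = betweenSquares & occupied;
      return blockers && !(blockers & (blockers - 1));
    }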
Swap mg and eg in internal representation of Score
Instrumentation shows that in make_score(mg, eg) calls, the mg value is
zero in 25.9% of the calls while the eg value is zero in 36.8% of them.
Swapping the mg and eg fields in the internal representation of Score
allows the compiler to optimize away the shift in (eg << 16) + mg in
more cases, resulting in a 0.3% overall speed-up.
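A sketch of the packing (the enum-based encoding follows Stockfish's style, but the details here are illustrative): with eg in the upper half, the shift folds away whenever eg is a compile-time zero, which per the instrumentation above is the more frequent case.

    enum Score : int { SCORE_ZERO = 0 };

    // Pack eg in the upper 16 bits and mg in the lower ones; the
    // unsigned cast sidesteps undefined behaviour for negative eg.
    constexpr Score make_score(int mg, int eg) {
      return Score(int(unsigned(eg) << 16) + mg);
    }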
Rescale the king danger variables in evaluate_king() to
remove the KingDanger[] array. This avoids the cost of
the memory accesses to the array and simplifies the non-linear
transformation used.
Full credits to "hxim" for the seminal idea and implementation,
see pull request #786.
https://github.com/official-stockfish/Stockfish/pull/786
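A sketch of the shape of such a transform (the quadratic form and the constant are illustrative assumptions, not the tuned values):

    #include <algorithm>

    // Replace the KingDanger[] table lookup with a direct non-linear
    // transform of the danger counter.
    int king_danger_penalty(int kingDanger) {
      int d = std::max(kingDanger, 0);
      return d * d / 4096;
    }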
Marco Costalba [Mon, 29 Aug 2016 07:11:20 +0000 (09:11 +0200)]
Use per-thread counterMoveHistory
Drops a scalability bottleneck due to memory contention
of a single shared table across threads. The effect starts
to be sensible with a high number of threads. Specifically
we have a small regression with 7 threads both at 60 and
180 seconds TC:
We have a regression because counterMoveHistory table is
quite big and it takes time for a single thread to fill it.
Sharing the table yields to a higher fill rate and better
quality of moves and up to 7 threads the benefits of sharing
more then compensate the loss in speed due to contention.
Interestingly even with a 3X longer TC, so with more time
for the single thread to catch up, the improvment is quite
limited and below noise level. It seems we really need much
longer TC to saturate the table.
When we move to high threads number it's another story:
As expected the speed-up more than compensates the filling
rate, and we expect that with tournament TC, where single
thread is able to saturate the table, the difference will
be even stronger. For instance for TCEC 9 super-final time
control will be 180 minutes + 15 seconds and this scalability
improvement seems definitely the way to go.
So, summarizing:
GOOD:
Measured big improvement in high core scenario
Suitable for TCEC 9 superfinal (big hardware, very long TC)
Consistent and natural patch that extends to counterMoveHistory
what we already do for remaining history tables, that are all per-thread
Non functional change for the common case of a single core
Very simple (just 6 lines modified, no added ones)
BAD:
Small regression (within 2-3 ELO) with few threads and short TC
This is the most compact and neatest version
I was able to produce.
On normal builds I have a small slowdown:
normal builds base vs. simplification (gcc 4.8.1 Win7-64 i7-3770 @ 3.4GHz x86-64-modern)
Results for 20 tests for each version:
          Base      Test      Diff
Mean      1974744   1969333   5411
StDev     11825     10281     5874
p-value: 0.178
speedup: -0.003
On pgo-builds however I measure a nice 1.1% speedup
pgo-builds base vs. simplification
Results for 20 tests for each version:
          Base      Test      Diff
Mean      1974119   1995444   -21325
StDev     8703      5717      4623
p-value: 1
speedup: 0.011
Don't allow pinned pieces to attack the exchange-square as long as all
pinners (including potential ones) are on their original
squares.
As soon as a pinner moves to the exchange-square or gets captured on it,
we fall back to standard SEE behaviour.
This correctly handles the majority of cases with absolute pins.
In evaluate, we start by initializing with pos.psq_score
and adding the material imbalance. After that, we check
whether a specialized eval exists, and if so we return
that value, discarding whatever we have computed so far.
It is more logical to first probe the material entry and
return if we have a specialized eval, and only initialize
eval with those values when that is not the case. There is
no measurable speed difference on my computer.
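A minimal sketch of the reordering (the types here are stand-ins, not Stockfish's real interfaces):

    struct MaterialEntry {
      bool hasSpecializedEval;
      int  specializedEval;
      int  imbalance;
    };

    int evaluate_sketch(const MaterialEntry& me, int psqScore) {
      // Probe the material entry first: nothing is computed and then
      // discarded when a specialized eval exists.
      if (me.hasSpecializedEval)
          return me.specializedEval;

      int score = psqScore + me.imbalance;
      // ... the rest of the evaluation continues from here
      return score;
    }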