Marco Costalba [Sun, 16 Feb 2014 11:20:37 +0000 (12:20 +0100)]
Increase MAX_PLY from 100 to 120
Under some very rare circumstances, 100 plies of search
may not be enough. Increasing the limit much further
could lead to crashes due to hitting the stack size
limit on some platforms.
Marco Costalba [Sun, 16 Feb 2014 10:37:29 +0000 (11:37 +0100)]
Fix material key for King
Currently the king has no material key associated with
it, because a legal position can never lack either king,
so there is no need to keep track of it.
The consequence is that a position with only the two
kings has a material key of zero, and if the material
hash table is empty any entry will match, which is
wrong.
Normally the bug is hidden because the check for a draw
with pos.is_draw() is done before the evaluate() call,
so in actual gameplay we never look up the material key
of a position with just the two kings.
Nevertheless the bug is there and can be reproduced by
setting up a position with only the two kings at startup
and typing 'eval' at the prompt.
The fix is very simple: add a random key for the king
as well.
Also fixed the condition in material.cpp to avoid
asserting when a 'just 2 kings' position is evaluated.
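To make the failure mode concrete, here is a minimal standalone
sketch with hypothetical names (not the actual Stockfish code):
when the kings contribute nothing to the material key, the
bare-kings position hashes to zero, exactly the key sitting in a
zero-initialized hash entry.

    #include <cstdint>
    #include <iostream>
    #include <random>

    using Key = uint64_t;

    enum PieceType { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING, PIECE_TYPE_NB };

    Key pieceKeys[2][PIECE_TYPE_NB]; // hypothetical per-color random keys

    void init_keys(bool keyForKing) {
        std::mt19937_64 rng(42);
        for (int c = 0; c < 2; ++c)
            for (int pt = PAWN; pt < PIECE_TYPE_NB; ++pt)
                pieceKeys[c][pt] = (pt == KING && !keyForKing) ? 0 : rng();
    }

    int main() {
        init_keys(false); // old behavior: no key for the king
        Key k = pieceKeys[0][KING] ^ pieceKeys[1][KING];
        std::cout << "without king keys: " << k << "\n"; // 0, matches empty entries

        init_keys(true);  // the fix: the king gets a random key too
        k = pieceKeys[0][KING] ^ pieceKeys[1][KING];
        std::cout << "with king keys:    " << k << "\n"; // non-zero
    }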
Marco Costalba [Sat, 15 Feb 2014 21:17:58 +0000 (22:17 +0100)]
Restore ProbCut name
Actually MultiCut is too different from our current scheme.
Note that ProbCut is not exactly what we do either, because
we try just a handful of captures instead of all moves;
nevertheless the name seems more in line with what we do.
Joerg Oster [Thu, 13 Feb 2014 22:44:12 +0000 (23:44 +0100)]
Return static eval when reaching MAX_PLY
Makes more sense than returning a draw score. Tested
with a reduced MAX_PLY = 30, it passed both at short TC
LLR: 2.95 (-2.94,2.94) [-1.50,4.50]
Total: 17434 W: 3345 L: 3194 D: 10895
And long TC
LLR: 2.97 (-2.94,2.94) [0.00,6.00]
Total: 2610 W: 488 L: 373 D: 1749
With the current limit of MAX_PLY = 100 the patch should
not introduce any measurable change; nevertheless it is
the correct approach.
Idea of returning eval is from Michel Van den Bergh.
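A toy sketch of the guard this patch describes (hypothetical,
simplified search; not the actual Stockfish code):

    constexpr int MAX_PLY = 100;

    struct Position { int material; };

    // Hypothetical static evaluation for the sketch
    int evaluate(const Position& pos) { return pos.material; }

    int search(Position& pos, int alpha, int beta, int depth, int ply) {
        (void)beta; // unused in this toy

        // Previously the engine returned a draw score at the ply limit;
        // the static eval is a better estimate of the subtree's value.
        if (ply >= MAX_PLY)
            return evaluate(pos);

        if (depth == 0)
            return evaluate(pos);

        // ... generate moves and recurse into search() as usual ...
        return alpha; // placeholder for the toy
    }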
Marco Costalba [Fri, 14 Feb 2014 09:43:37 +0000 (10:43 +0100)]
Fix magic boosters conversion
Fix a small overflow error while converting the
magic boosters from right rotate to left rotate;
in particular, booster 38 was converted to 4122
instead of the correct value 26.
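For reference, a right rotation of a 64-bit value by r equals a
left rotation by (64 - r) & 63, which is how 38 maps to 26
(64 - 38 = 26). A small self-contained check:

    #include <cassert>
    #include <cstdint>

    uint64_t rotl(uint64_t x, int s) { s &= 63; return s ? (x << s) | (x >> (64 - s)) : x; }
    uint64_t rotr(uint64_t x, int s) { s &= 63; return s ? (x >> s) | (x << (64 - s)) : x; }

    int main() {
        uint64_t x = 0x9D39247E33776D41ULL; // arbitrary test pattern
        int r = 38;
        assert(rotr(x, r) == rotl(x, (64 - r) & 63)); // left amount is 26
    }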
Marco Costalba [Wed, 12 Feb 2014 13:47:36 +0000 (14:47 +0100)]
Move magic random to RKISS
When initializing the magic numbers used to
compute sliding attacks, we endlessly generate a
random number and test it as a possible magic.
In the general case this takes a lot of iterations,
but here, instead of picking a purely random number,
we rotate it a couple of times and so generate a
number that we know has a good probability of being
a magic candidate. This is because the quantities by
which we rotate the number are known in advance to
quickly produce a good candidate.
The patch, inspired by DON, just moves the shuffle to
RKISS, changing the boosters to account for a left
rotation instead of a right rotation as in the original.
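A hedged illustration of the idea (toy PRNG and a simplified
booster encoding; the real RKISS generator and booster layout
differ): candidates are kept sparse by ANDing fresh randoms, and
rotated by the precomputed per-rank amounts packed in the booster.

    #include <cstdint>

    // Toy xorshift PRNG standing in for RKISS (illustrative only)
    struct Prng {
        uint64_t s = 0x4D595DF4D0F33173ULL;
        uint64_t rand64() {
            s ^= s >> 12; s ^= s << 25; s ^= s >> 27;
            return s * 0x2545F4914F6CDD1DULL;
        }
    };

    uint64_t rotl(uint64_t x, int k) {
        k &= 63;
        return k ? (x << k) | (x >> (64 - k)) : x;
    }

    // Sketch: the booster packs two small left-rotation amounts
    // (6 bits each) known to quickly yield good candidates for the
    // rank being initialized; ANDing randoms keeps the candidate
    // sparse, as magics tend to be.
    uint64_t magic_candidate(Prng& rng, int booster) {
        uint64_t m = rng.rand64();
        m = rotl(m, booster & 0x3F) & rng.rand64();
        return rotl(m, (booster >> 6) & 0x3F) & rng.rand64();
    }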
Marco Costalba [Sat, 8 Feb 2014 12:07:57 +0000 (13:07 +0100)]
Don't fear races when they are harmless
Race conditions do actually exist in an engine; just
think for a moment of concurrent TT access. Racy code
is not a problem per se, if the consequences are well
known and correctly handled.
In the case of TT access we ensure that the TT move is
validated before being tried; here we just retry the
same move in less than 1 case out of a million, which is
totally harmless considering that the second time the
move is tried we very probably get an immediate TT hit
and the search quickly returns.
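For context, a hedged sketch of the kind of validation the TT
example refers to (hypothetical names; the engine's real checks
are more involved):

    using Move = int;                 // toy move encoding
    constexpr Move MOVE_NONE = 0;

    struct Position;
    bool pseudo_legal(const Position& pos, Move m); // assumed validity check

    Move validated_tt_move(const Position& pos, Move ttMove) {
        // The TT is probed without locks, so ttMove may belong to a
        // different position that hashed to the same slot, or come
        // from a torn write. Validating before trying it turns the
        // race into a harmless "no TT move" case.
        return pseudo_legal(pos, ttMove) ? ttMove : MOVE_NONE;
    }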
Lucas Braesch [Tue, 4 Feb 2014 07:18:19 +0000 (08:18 +0100)]
Better document null search window
Hopefully this patch makes the code more:
* Self-documenting: Null search is always a zero window search,
because it is testing for a fail high. It should never be done
on a full window! The current code only works because we don't
do it at PV nodes, and therefore (alpha, beta) = (beta-1, beta):
that's the kind of "clever" trick we should avoid.
* Idiot-proof: If we want to enable null search at PV nodes, all we
need to do now is comment out the !PvNode condition. It's that simple!
In theory, null search should not be done at PV nodes, because PV nodes
should never fail high. But in practice, they DO fail high, because of
aspiration windows, and search inconsistencies, for example. So it makes
sense to keep that flexibility in the code.
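A minimal sketch of the convention being documented (toy
signatures, not the actual search code): the zero window around
beta is passed explicitly, so the "test for fail high only" intent
is visible at the call site even at PV nodes, where alpha is not
beta - 1.

    using Value = int;
    using Depth = int;
    struct Position;
    struct Stack;

    Value search(Position& pos, Stack* ss, Value alpha, Value beta, Depth d);

    // Zero-window probe: any result >= beta means the move would fail
    // high; anything else is a fail low. No exact score is produced.
    Value null_window_test(Position& pos, Stack* ss, Value beta, Depth d) {
        return search(pos, ss, beta - 1, beta, d);
    }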
Lucas Braesch [Mon, 3 Feb 2014 01:41:32 +0000 (09:41 +0800)]
Better document razoring
Use ralpha instead of rbeta
* rbeta is confusing people. It took THREE attempts to code razoring
at PV nodes correctly in a recent test, because of the rbeta trick.
Unnecessary tricks should be avoided.
* The more correct and self-documenting way of doing this is to say
that we use a zero window around alpha-margin, not beta-margin.
The fact that alpha happens to be beta-1 (because we only do razoring
at non-PV nodes) and that the current code with rbeta therefore works
may be correct, but it is confusing.
Remove the misleading and partially erroneous comment about returning
v + margin:
* comments should explain what the code does, not what it could have done.
* this comment is partially wrong in saying that v+margin is "logical",
and that it is "surprising" that it doesn't work.
From a theoretical perspective, at least 3 ways of doing this are equally
defensible:
1/ fail hard: return alpha: The most conservative. We bet that the search
will fail low, but we don't know by how much and don't want to take risks.
2/ aggressive fail soft: return v (what the current code does). This
corresponds to normal fail soft, with the added assumption that we don't
care about the reduction effect (see point 3/ below)
3/ conservative fail soft: return v + margin. If the reduced search (qsearch)
gives us a score <= v, we bet that the non reduced search will give us a
score <= v + margin.
* Saying that 2/ is "logical" implies that 1/ and 3/ are not, which is
arguably wrong. Besides, experimental results tell us that 2/ beats 3/,
and that's not something we can argue against: experimental results are
the only trusted metric.
* Also, with the benefit of hindsight, I don't think the fact that 2/ is
better than 3/ is surprising at all. The point is that it is YOUR turn to
move, and you are assuming that by NOT playing (and letting the opponent
capture your hanging pieces in QS) you cannot generally GAIN razor_margin(depth).
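A hedged sketch of the scheme in option 2/ (simplified signatures
and condition; the engine applies further restrictions): a zero
window around ralpha = alpha - margin, returning the qsearch score
on a fail low.

    using Value = int;
    using Depth = int;
    struct Position;
    struct Stack;

    Value qsearch(Position& pos, Stack* ss, Value alpha, Value beta); // stub
    Value razor_margin(Depth d);                                      // stub

    // Razoring phrased with ralpha, as the commit suggests. At non-PV
    // nodes alpha == beta - 1, so this is equivalent to the old rbeta
    // formulation, but the window now names the bound actually tested.
    bool try_razor(Position& pos, Stack* ss, Value alpha, Depth d,
                   Value eval, Value& result) {
        if (eval + razor_margin(d) > alpha)
            return false; // static eval too good: no razoring

        Value ralpha = alpha - razor_margin(d);
        Value v = qsearch(pos, ss, ralpha, ralpha + 1);
        if (v <= ralpha) {
            result = v; // option 2/: aggressive fail soft, return v
            return true;
        }
        return false;
    }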
Uri Blass [Mon, 27 Jan 2014 19:08:31 +0000 (20:08 +0100)]
Reduce VALUE_KNOWN_WIN to 10000
With some positions like
8/8/8/2p2K2/1pp5/br1p1b2/2p2r2/qqkqq3 w - -
The eval score is higher than VALUE_INFINITE because it
is the sum of VALUE_KNOWN_WIN plus a big material
advantage. This leads to an assert. Here are the
steps to reproduce:
Compile SF with debug=yes then do
./stockfish
position fen 8/8/8/2p2K2/1pp5/br1p1b2/2p2r2/qqkqq3 w - -
go depth 1
This patch fixes the issue in this case, but there do
exist other positions for which the patch is not enough,
and we will need to limit the eval score to be sure not
to overflow the limit.
Note that it is not possible to increase the value of
VALUE_INFINITE, because it should remain within the
int16_t range to be stored in a TT entry.
Instead of a fixed reduction of ONE_PLY, the null move
dynamic reduction based on value can now grow larger
in case we are above beta by a value much higher than
PawnValueMg.
Note that now an eval returning VALUE_KNOWN_WIN makes
the null search drop into qsearch.
Passed both short TC:
LLR: 2.95 (-2.94,2.94) [-1.50,4.50]
Total: 26141 W: 4871 L: 4699 D: 16571
And long TC:
LLR: 2.97 (-2.94,2.94) [0.00,6.00]
Total: 33695 W: 5309 L: 5056 D: 23330
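A hedged sketch of a value-based null-move reduction of this kind
(the coefficients here are assumptions, not the engine's exact
values): the reduction grows with depth and with the margin above
beta, so a huge static eval such as VALUE_KNOWN_WIN reduces below
one ply and the null search lands directly in qsearch.

    using Value = int;
    using Depth = int;

    constexpr Depth ONE_PLY     = 2;    // toy depth granularity
    constexpr Value PawnValueMg = 198;  // assumed midgame pawn value

    Depth null_move_reduction(Depth depth, Value eval, Value beta) {
        Depth R = 3 * ONE_PLY + depth / 4;      // fixed, depth-based part
        if (eval >= beta)
            R += (eval - beta) / PawnValueMg * ONE_PLY; // value-based part
        return R; // with eval near VALUE_KNOWN_WIN, depth - R < ONE_PLY
    }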
Marco Costalba [Sat, 18 Jan 2014 17:19:16 +0000 (18:19 +0100)]
Increase max hash size to 16GB
TCEC season 3, which is due to start in a few weeks, just
had its server upgraded to 64GB RAM and will therefore allow
16GB hash to be used per engine.
This is almost the upper limit without changing the
types of size and hashMask. Beyond this we would need
to move to uint64_t instead of uint32_t.
Chris Cain [Thu, 16 Jan 2014 21:50:08 +0000 (21:50 +0000)]
Simplify pawnless endgame evaluation
Retire KmmKm evaluation function. Instead give a very drawish
scale factor when the material advantage is small and not much
material remains.
Retire NoPawnsSF array. Pawnless endgames without a bishop will
now be scored higher. Pawnless endgames with a bishop pair will
be scored lower. The effect of this is hopefully small.
Consistent results both at short TC (fixed games):
ELO: -0.00 +-2.1 (95%) LOS: 50.0%
Total: 40000 W: 7405 L: 7405 D: 25190
And long TC (fixed games):
ELO: 0.77 +-1.9 (95%) LOS: 78.7%
Total: 39690 W: 6179 L: 6091 D: 27420
When we have a fail high on a quiet move, store it in
a Followupmoves table indexed by the previous move of
the same color (instead of the immediately previous
move, as in the countermoves case).
Then use this table for quiet move ordering in the same
way we are already doing with countermoves.
These followup moves will be tried just after countermoves
and before the remaining quiet moves.
Passed both short TC
LLR: 2.95 (-2.94,2.94) [-1.50,4.50]
Total: 10350 W: 1998 L: 1866 D: 6486
And long TC
LLR: 2.95 (-2.94,2.94) [0.00,6.00]
Total: 14066 W: 2303 L: 2137 D: 9626
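A hedged sketch of the indexing difference (toy tables keyed
directly by move; the real tables are indexed by piece and
destination square): countermoves key on the opponent's last move,
followup moves on our own move two plies back.

    using Move = int;
    constexpr int MAX_MOVE = 1 << 16;

    // Toy tables mapping a previous move to the quiet move that refuted it
    Move countermoves[MAX_MOVE];   // keyed by the opponent's last move
    Move followupmoves[MAX_MOVE];  // keyed by our own move two plies back

    void on_quiet_fail_high(Move prevOpponentMove, Move prevOwnMove, Move m) {
        countermoves[prevOpponentMove] = m;  // (ss-1)->currentMove
        followupmoves[prevOwnMove]     = m;  // (ss-2)->currentMove, same color
    }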
Marco Costalba [Sun, 12 Jan 2014 21:32:41 +0000 (22:32 +0100)]
Retire KBBKN endgame
As pointed out by Joona, Lucas and other people in
the forum, this endgame is not a known win: there are
many positions where it takes more than 50 moves to
claim the win, and because exact rules are not possible
it is better to retire it and let the search work out
the endgame for us.
In case the ply is very high, the function will round
to zero (although mathematically it is always
bigger than zero). On my system this happens at
move number 6661.
Although 6661 moves in a game is, of course,
practically impossible, for safety and local
consistency it makes sense to ensure the returned
value is positive.
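A hedged sketch of such a guard, using a skew-logistic shape with
constants assumed for illustration: adding the smallest positive
double keeps the result strictly positive even when the
exponential term overflows and the power underflows to zero.

    #include <cfloat>
    #include <cmath>

    // Assumed shape and constants for the sketch; the engine's
    // actual parameters may differ.
    constexpr double XSCALE = 9.3, XSHIFT = 59.8, SKEW = 0.172;

    double move_importance(int ply) {
        // For very large ply, exp() overflows and pow() underflows
        // to 0.0; adding DBL_MIN guarantees a strictly positive
        // result so the ratios computed from it stay finite.
        return std::pow(1 + std::exp((ply - XSHIFT) / XSCALE), -SKEW) + DBL_MIN;
    }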
Marco Costalba [Wed, 1 Jan 2014 12:35:11 +0000 (13:35 +0100)]
Further simplify move_importance()
Function move_importance() is already always
positive, so we don't need to add a constant
term to ensure it.
Because move_importance() is used to calculate
ratios of a linear combination (as explained in the
previous patch), the result is not affected. I have
also verified it directly.
Drop a useless parameter. This works because ratio1 and ratio2
are ratios of linear combinations of thisMoveImportance and
otherMovesImportance and so the yscale cancels out.
Therefore the values of ratio1 and ratio2 are independent
of yscale and yscale can be retired.
The same applies to yshift, but there we want to ensure
move_importance() > 0, so this safety guard is hard-coded
directly in the function definition.
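A quick numeric check of the cancellation argument (toy values and
names): scaling both importances by any common yscale leaves the
ratio unchanged.

    #include <cassert>
    #include <cmath>
    #include <initializer_list>

    int main() {
        double thisMove = 0.37, otherMoves = 4.2; // arbitrary toy importances
        for (double yscale : {1.0, 7.0, 123.4}) {
            double a = yscale * thisMove, b = yscale * otherMoves;
            double ratio = a / (a + b); // the common factor cancels
            assert(std::fabs(ratio - thisMove / (thisMove + otherMoves)) < 1e-12);
        }
    }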
Actually there are some small differences due to rounding
errors, usually at most a few milliseconds, which means
below 1% of the returned time, apart from very short
intervals in which a difference of just 1 msec can rise
to 2-3% of the total available time.
Marco Costalba [Wed, 1 Jan 2014 09:50:30 +0000 (10:50 +0100)]
Rename pawn chain to connected
The flag is now raised also in the case of a pawn duo,
i.e. when we have two adjacent pawns on the same rank,
and not only in the case of a chain, i.e. when the two
pawns are on a diagonal line.
See this for a reference:
http://en.wikipedia.org/wiki/Connected_pawns
Ralph Stößer [Mon, 9 Dec 2013 07:02:49 +0000 (08:02 +0100)]
Re-search at intermediate depth if LMR is very high
After a fail high in LMR, if the reduction is very high,
do a re-search at a lower depth before the full depth
one. Chances are that the re-search will fail low and
the full depth search is skipped.
Passed both short TC:
LLR: 2.95 (-2.94,2.94) [-1.50,4.50]
Total: 11363 W: 2204 L: 2069 D: 7090
And long TC:
LLR: 2.95 (-2.94,2.94) [0.00,6.00]
Total: 7292 W: 1195 L: 1061 D: 5036
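A hedged sketch of the added step (toy signatures; the 4-ply
threshold and 2-ply step are assumptions): after a reduced-depth
fail high with a large reduction, verify at an intermediate depth
before paying for the full-depth re-search.

    using Value = int;
    using Depth = int;
    struct Position;
    struct Stack;

    constexpr Depth ONE_PLY = 2; // toy depth granularity

    Value search(Position& pos, Stack* ss, Value alpha, Value beta, Depth d);

    // Called after an LMR search at (newDepth - reduction) failed high.
    // If the reduction was large, re-search at an intermediate depth
    // first; a fail low there skips the expensive full-depth re-search.
    Value verify_lmr_fail_high(Position& pos, Stack* ss, Value alpha,
                               Depth newDepth, Depth reduction) {
        Value v = alpha + 1; // the reduced search failed high to get here
        if (reduction >= 4 * ONE_PLY) {
            Depth d = newDepth - 2 * ONE_PLY;
            if (d < ONE_PLY)
                d = ONE_PLY;
            v = search(pos, ss, alpha, alpha + 1, d);
        }
        if (v > alpha) // still failing high: do the full-depth re-search
            v = search(pos, ss, alpha, alpha + 1, newDepth);
        return v;
    }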
Chris Caino [Tue, 3 Dec 2013 21:58:39 +0000 (21:58 +0000)]
Broader condition for dangerous pawn moves
Instead of requiring a passed pawn, we now just require
the pawn to be in the opponent's camp for its advance to
be considered a dangerous move. Some renaming to reflect
the change.
Passed both short TC test
LLR: 2.95 (-2.94,2.94) [-1.50,4.50]
Total: 10358 W: 2033 L: 1900 D: 6425
And long TC
LLR: 2.95 (-2.94,2.94) [0.00,6.00]
Total: 21459 W: 3486 L: 3286 D: 14687
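A hedged sketch of such a condition (hypothetical helper;
"opponent camp" read as beyond the board's mid-line from the
mover's point of view):

    enum Rank { RANK_1, RANK_2, RANK_3, RANK_4, RANK_5, RANK_6, RANK_7, RANK_8 };
    enum Color { WHITE, BLACK };
    using Square = int; // 0..63, a1 = 0

    // Rank of a square from the side-to-move's own point of view
    Rank relative_rank(Color us, Square s) {
        int r = s / 8;
        return Rank(us == WHITE ? r : 7 - r);
    }

    // Old rule: only passed-pawn pushes were "dangerous". Broader rule:
    // any pawn push landing in the opponent's half of the board counts.
    bool dangerous_pawn_push(Color us, Square to) {
        return relative_rank(us, to) > RANK_4;
    }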