Marco Costalba [Tue, 29 Sep 2009 09:14:09 +0000 (10:14 +0100)]
Print RootMoveList startup scoring
This satisfies a specific user request from 28/8/2009:
"The only issue I have is that during multiPV analysis, the depth 1
best move score is not reported by the engine (reporting for the best
move begins at depth 2). I need it at depth 1 also. Would it be
possible to make this modification in future versions? This would be
of great help as otherwise I will have to use a lesser engine.
The goal of my project is to calculate the ELO performance in a game
and also the ELO rating of individual moves. For this I need depth 1
scores for lower rated performances. I intend to distribute the program
for free upon completion.
Thanks, Jack Welbourne"
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 28 Sep 2009 11:27:05 +0000 (12:27 +0100)]
Retire compute_weight() in evaluation.cpp
It is used only in weight_option(), so inline it there.
Also unroll the color loop for evaluate_space() and
finally apply some assorted code style fixes.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 28 Sep 2009 09:46:55 +0000 (10:46 +0100)]
Unroll color loops in evaluate
Use templates to manually unroll the loops so that
many values can be calculated at compile time, or at
runtime but with fast direct memory access instead of
an indirect one.
This change gives a speed-up of 3.5% on the PGO build !!! :-)
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
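To illustrate the technique (not the actual evaluate.cpp code), here is a minimal sketch of unrolling a color loop via a template parameter, so that color-dependent lookups become compile-time constants; all names are illustrative:

    #include <cstdio>

    enum Color { WHITE, BLACK };

    // With the color as a template parameter, Us and Them are compile-time
    // constants, so the array accesses below use direct, constant offsets.
    template<Color Us>
    int evaluate_side(const int material[2]) {
        const Color Them = (Us == WHITE ? BLACK : WHITE);
        return material[Us] - material[Them];
    }

    int main() {
        int material[2] = { 3100, 2950 };
        // The color "loop" is unrolled by hand: one instantiation per color
        int score = evaluate_side<WHITE>(material) - evaluate_side<BLACK>(material);
        std::printf("score: %d\n", score);
        return 0;
    }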
Marco Costalba [Sun, 27 Sep 2009 06:58:28 +0000 (07:58 +0100)]
Change back file mode of misc.cpp
It was erroneously changed by
6bf22f35 from
mode 100644 to 100755.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 26 Sep 2009 05:14:12 +0000 (07:14 +0200)]
Update piece list iteration also in evaluate_pieces()
Move to the same scheme we already use in generate_piece_moves().
This simple patch gives a speed-up of 1.4% !!
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 26 Sep 2009 04:25:16 +0000 (06:25 +0200)]
Retire faked Windows version of gettimeofday()
Use equivalent Windows function _ftime() instead.
This patch also removes two long-standing warnings
under MSVC.
No functional change and no change for non-Windows systems.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 23 Sep 2009 20:45:32 +0000 (21:45 +0100)]
Micro optimization of generate_piece_moves()
This patch makes the piece list always terminated by SQ_NONE,
so that we can use a simpler and faster loop in move
generation.
Speedup is about 0.6%.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
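A rough sketch of the sentinel idea, with simplified types rather than the real Position piece list: terminating the list with SQ_NONE lets the generation loop drop the explicit count check.

    #include <cstdio>

    enum Square { SQ_NONE = 64 };

    int main() {
        // Piece list terminated by the SQ_NONE sentinel instead of carrying a length
        Square knights[] = { Square(1), Square(18), Square(45), SQ_NONE };

        // Simpler, faster loop: no comparison against a separately stored piece count
        for (const Square* s = knights; *s != SQ_NONE; ++s)
            std::printf("generate moves for knight on square %d\n", int(*s));
        return 0;
    }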
Marco Costalba [Wed, 23 Sep 2009 16:47:03 +0000 (17:47 +0100)]
Retire kingSquare[] array
It is redundant. Use pieceList[c][KING][0] instead.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 23 Sep 2009 16:11:29 +0000 (17:11 +0100)]
Reorder data layout and optimize access pattern
With this very simple patch we get a speed boost
of 0.8% on my PC!
Sometimes we devise the most complex tricks to increase speed,
when instead the best results come from the simplest solutions.
No functional change of course ;-)
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 23 Sep 2009 13:06:55 +0000 (15:06 +0200)]
Fix a couple of Intel compiler warnings
And avoid calculating emptySquares in the pawn
captures case.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 23 Sep 2009 12:55:44 +0000 (14:55 +0200)]
Fix a piece_of_color_and_type() / pieceS_of_color_and_type() typo
Bug introduced in
17c51192
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 23 Sep 2009 10:29:10 +0000 (11:29 +0100)]
Rename generate_piece_moves() to generate_piece_evasions()
A better and more specific name. Also a bit of code reshuffling.
Verified no functional change and no performance change
for the whole series.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 23 Sep 2009 10:18:04 +0000 (11:18 +0100)]
Retire generate_pawn_captures()
And unify it into generate_pawn_noncaptures(), renamed
generate_pawn_moves().
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 23 Sep 2009 09:47:11 +0000 (10:47 +0100)]
Retire generate_pawn_blocking_evasions()
And unify it into generate_pawn_noncaptures().
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 23 Sep 2009 08:40:33 +0000 (09:40 +0100)]
Standardize generate_pawn_blocking_evasions()
Rewrite it in the form normally used in other similar
functions like generate_pawn_noncaptures().
This allows easier reading of the pawn move generators
and simplifies the code a bit.
No functional change (tested on more than 100M nodes).
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 21 Sep 2009 17:07:11 +0000 (18:07 +0100)]
Code style and subtle fix in move_is_legal()
A bunch of trivial code style and comment fixes.
Among them there is a real fix for a subtle case
involving promotion moves.
We currently check that a pawn push to the 8th/1st rank
must be a promotion, but we don't check the contrary,
i.e. that a pawn push to a different rank must NOT be
a promotion. Note that, funnily enough, we perform this
check for all the other pieces, but not for the pawns!
This patch fixes this real corner case.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
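A hedged sketch of the extra condition described above (hypothetical helper, not the actual move_is_legal() code): a move flagged as a promotion is accepted only when a pawn actually reaches the last rank.

    // Illustrative only: the promotion flag must agree with the destination rank.
    // relativeRankTo is 0..7 from the moving side's point of view.
    bool promotion_flag_consistent(bool isPawn, int relativeRankTo, bool isPromotion) {
        if (!isPawn)
            return !isPromotion;            // only pawns may carry a promotion flag
        if (relativeRankTo == 7)
            return isPromotion;             // push to the 8th/1st rank must promote
        return !isPromotion;                // any other rank must NOT be a promotion
    }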
Marco Costalba [Mon, 21 Sep 2009 10:54:25 +0000 (11:54 +0100)]
Simplify move legality check for uncommon cases
Remove a bunch of difficult and tricky code that tests the
legality of castling and en-passant moves, and instead use a slower
but simpler check against the list of generated legal moves.
Because these moves are very rare the performance impact
is small, but the code simplification is very big: almost 100 lines
of difficult code removed!
No functionality change. No performance change (strangely enough
there is not even a minimal performance regression in PGO builds, but
instead a slight and unexpected increase).
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 21 Sep 2009 09:58:25 +0000 (10:58 +0100)]
Enable the functionality of the previous patch
Now under-promotion checks are generated.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 21 Sep 2009 09:00:33 +0000 (10:00 +0100)]
When generating checks add possibly under-promotions
In qsearch at depth 0 we generate only captures and checks.
Queen promotion moves are generated among the captures, but
under-promotion moves (both captures and non-captures) are
never generated, even if they could give a discovered check.
This patch fixes this limitation by extending generate_pawn_noncaptures()
to also generate checking moves when required.
Apart from adding the (rare) case of an under-promotion that gives
discovered check, the patch is also a good cleanup because it removes
generate_pawn_checks() altogether.
This patch does the code clean-up but does not enable the functional
change, so as to allow easier debugging.
No functional change and no performance change (actually a very,
very small speed increase).
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 21 Sep 2009 07:47:47 +0000 (08:47 +0100)]
Fix a bug in generate_piece_checks()
We are also generating king moves that give check!
Of course these moves are illegal, so they are in any case
filtered out in MovePicker. Nevertheless we should avoid
generating them.
Also simplify the code a bit.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 21 Sep 2009 06:55:26 +0000 (07:55 +0100)]
Small micro optimization in generate_evasions()
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 19:13:24 +0000 (20:13 +0100)]
Change evaluation GrainSize from 4 to 8
Idea from Joona.
After 999 games at 1+0 on my Intel Core 2 Duo
Orig - Mod: +215 =538 -226 (+11 ELO)
On Joona's QUAD after 845 games at 1+0
Orig - Mod: 151 - 181 - 513 (+13 ELO)
So it seems a good change!
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Fri, 18 Sep 2009 08:32:57 +0000 (10:32 +0200)]
Save static evaluation also for failed low nodes
When a node fails low and bestValue is still equal to
the original static node evaluation, then save this
in the TT along with the usual info.
This will allow us to avoid a future costly evaluation() call.
This patch extends to fail-low nodes what we already do
for fail-high ones.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 18:39:54 +0000 (19:39 +0100)]
Revert evaluation drift
It is still not clear if it helps and, especially, how it
helps. So revert for now to avoid any influence on
the features now under test.
With this patch we are back to being functionally
equivalent to patch
e33c94883 F_53.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 19 Sep 2009 16:34:42 +0000 (18:34 +0200)]
Evaluation drift: always add 7 instead of ply
After 828 games at 1+0
Mod vs Orig +191 =447 -190 50.06% 414.5/828
So almost no difference. The patch is committed more for
documentation purposes than for other reasons.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 13:23:46 +0000 (14:23 +0100)]
Rename piece_attacks_from() to attacks_from()
It is in line with attackers_to() and is shorter; the piece
is redundant anyway because it is passed as a template
parameter.
Also integrate pawn_attacks_from() into the attacks_from()
family, so as to have a uniform attack info API.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 10:00:21 +0000 (11:00 +0100)]
Remove undefined pinned_pieces(Color c, Bitboard& p)
It was added in revision
5f142ec2 but never used.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 09:47:59 +0000 (10:47 +0100)]
Retire attackers_to(Square s, Color c)
Use the definition in the few places where it is needed.
As a nice side effect there is also an optimization in
generate_evasions(), where the bitboard of enemy pieces
is computed only once, outside a tight loop.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 09:26:54 +0000 (10:26 +0100)]
Rename piece_attacks() to piece_attacks_from()
It is a bit longer but much easier to understand, especially
for people new to the sources. I remember it was not trivial
for me to understand that the returned attack bitboard refers to
attacks launched from the given square, not attacks on the
given square.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 08:43:28 +0000 (09:43 +0100)]
Cleanup piece_attacks_square() functions
Most of them are not required to be public and are
used in one place only, so remove them and use their
definitions directly.
Also rename piece_attacks_square() to piece_attacks()
to align with the current naming policy.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 08:31:48 +0000 (09:31 +0100)]
Rename attacks_to() in attackers_to()
These functions return a bitboard of attacking pieces,
not the attacks themselves, so reflect this in the name.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 07:59:18 +0000 (08:59 +0100)]
Change pawn_attacks() API
Instead of pawn_attacks(Color c, Square s), define it as
pawn_attacks(Square s, Color c) to better align with
the other attack info functions.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 07:43:25 +0000 (08:43 +0100)]
Clean up API for attack information
Remove the undefined functions sliding_attacks() and ray_attacks()
and retire square_is_attacked(); use the corresponding definition
instead. It is clearer that we are computing full attack
info for the given square.
Also fix some obsolete comments in the move generation functions.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 20 Sep 2009 06:04:22 +0000 (07:04 +0100)]
Move kingSquare[] array to StateInfo
This avoids having to revert it when undoing the move.
No functional change. No performance change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Fri, 18 Sep 2009 10:12:22 +0000 (12:12 +0200)]
Don't compensate TT for evaluation drift
It seems that it works better without compensating for
the drifted value when saving the static evaluation in the TT.
After 818 games at 1+0
Mod vs Orig +217 =429 -172 52.75% 431.5/818 +19 ELO
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Tue, 15 Sep 2009 07:11:42 +0000 (08:11 +0100)]
Use WIN32_LEAN_AND_MEAN in lock.h
This avoids including a bunch of rarely used
headers from windows.h.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona Kiiski [Wed, 16 Sep 2009 03:52:10 +0000 (06:52 +0300)]
Make the static value saved in the TT independent of ply
After 963 games at 1+0
Mod vs Orig +246 =511 -206 52.08% 501.0/962 +14 ELO
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Tue, 15 Sep 2009 05:47:27 +0000 (06:47 +0100)]
Evaluation drift
Increase evaluation score with ply.
After 940 games at 1+0
Mod vs Orig +247 =487 -206 +15 ELO
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 13 Sep 2009 15:13:49 +0000 (16:13 +0100)]
Fix semantic of piece_attacks<PAWN>
Return the bitboard with the pawn attacks for both colors,
so as to be aligned with the meaning of the other piece_attacks<Piece>
templates.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 13 Sep 2009 08:02:20 +0000 (09:02 +0100)]
Indirectly prefetch board[from]
One of the most time-critical functions is move_is_check(),
and in particular the call to type_of_piece_on(from) in the
switch statement.
This call looks up the board[] array and can be slow if board[from]
is not already cached. A few instructions earlier in the execution stream,
we check the move for legality with pl_move_is_legal().
This patch changes pl_move_is_legal() to use type_of_piece_on(from)
when checking for a king move, so that board[from] is automatically
cached in L1 and ready to be used by the nearby follower move_is_check().
Another advantage is that the call to king_square(us) in pl_move_is_legal()
is avoided most of the time.
The speed-up of this nice and tricky patch is 0.7%!
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 2 Sep 2009 12:19:51 +0000 (14:19 +0200)]
Retire piece_is_slider(PieceType pt)
It is not used in any part of the sources.
No functional change, of course ;-)
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 2 Sep 2009 09:57:38 +0000 (11:57 +0200)]
Second take at unifying bitboard representation access
This patch is built on Tord's idea of using functions instead of
templates to access the position's bitboards. This has the added advantage
that we don't need fallback functions for cases where the piece
type or the color is a variable and not a constant.
Also added Joona's suggestion to work around requests for two piece
types, like bishop_and_queens() and rook_and_queens().
No functionality or performance change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
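A minimal sketch of the accessor style being described, with an illustrative struct rather than the real Position class; the overload taking two piece types covers the bishops+queens / rooks+queens case mentioned above.

    #include <cstdint>
    #include <cstdio>

    typedef uint64_t Bitboard;
    enum Color { WHITE, BLACK };
    enum PieceType { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING, PIECE_TYPE_NB };

    struct Pos {
        Bitboard byType[PIECE_TYPE_NB];
        Bitboard byColor[2];

        // Plain functions work whether the arguments are constants or variables
        Bitboard pieces(PieceType pt) const          { return byType[pt]; }
        Bitboard pieces(PieceType pt, Color c) const { return byType[pt] & byColor[c]; }
        Bitboard pieces(PieceType pt1, PieceType pt2, Color c) const {
            return (byType[pt1] | byType[pt2]) & byColor[c];
        }
    };

    int main() {
        Pos p = {};
        p.byType[BISHOP] = 0x24; p.byType[QUEEN] = 0x08; p.byColor[WHITE] = 0x2C;
        std::printf("white bishops+queens: %llx\n",
                    (unsigned long long)p.pieces(BISHOP, QUEEN, WHITE));
        return 0;
    }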
Marco Costalba [Mon, 31 Aug 2009 15:07:03 +0000 (17:07 +0200)]
Templatize functions to get pieces by type
Use a single template to get the bitboard representation of
the position given the piece type as a constant.
This removes almost 80 lines of code and introduces a
uniform notation for querying by piece type.
No functional change and no performance change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 31 Aug 2009 21:08:15 +0000 (22:08 +0100)]
Set LMRPVMoves to 10 instead of 14
After 934 games at 1+0
Mod vs Orig +228 =493 -213 50.80% 474.5/934 +6 ELO
So it does not seem negative, and there is also the added
benefit of unifying LMRPVMoves use in search_pv() and in the
root list.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Tue, 1 Sep 2009 13:49:06 +0000 (15:49 +0200)]
Fix poly values mismerge
I managed to completely mismerge the correct values
for the QuadraticCoefficientsOppositeColor table :-(
Now it corresponds to the tuning branch for real.
After 999 games at 1+0
Mod vs Orig +247 =512 -240 50.35% 503.0/999 +2 ELO
So almost no change, but the new values come from the
same tuning session as the others, so it makes more sense to
use them.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Tord Romstad [Wed, 2 Sep 2009 07:58:15 +0000 (09:58 +0200)]
Bug fix for discovered checks in connected_moves().
Because of a hard-to-spot single-character bug in connected_moves(),
the discovered check code had no effect whatsoever. The condition
in the if (...) statement at the beginning of the code would always
evaluate to false.
Thanks to Edsel Apostol for pointing out this bug!
Marco Costalba [Mon, 31 Aug 2009 14:09:52 +0000 (16:09 +0200)]
Retire pieces_of_color_and_type()
It is used mainly in a bunch of inline one-liners
just below its definition. So substitute it with
the explicit definition and avoid information hiding.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 31 Aug 2009 12:28:11 +0000 (14:28 +0200)]
MovePicker: rename number_of_moves() to number_of_evasions()
This makes it clearer that only in that case is the move count
exact; otherwise it is only a partial quantity: the number of
moves of that phase.
In the PH_EVASIONS case, instead, we have only one phase.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 31 Aug 2009 10:33:44 +0000 (12:33 +0200)]
Use pointers instead of array indices also for badCaptures
For uniformity with the moves array handling.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 31 Aug 2009 08:59:33 +0000 (10:59 +0200)]
Document index[] and pieceList[] are not invariants
The index[] and pieceList[] arrays are not guaranteed to be
invariant under a do_move() + undo_move() sequence when a
capture move is involved.
The reason is that the captured piece is removed from
the list and substituted with the last one in do_move(),
while in undo_move() it is added back, but at the end of
the list.
Because index[] and pieceList[] are used in move generation
to scan the pieces, this means that moves will be generated
in a different order before and after a do_move() + undo_move()
sequence such as, for instance, the one in Position::has_mate_threat().
After the latest patches, move generation can now also be invoked
by the MovePicker constructor, and this explains why the order of
picked moves differs depending on whether the MovePicker object is
instantiated before or after a Position::has_mate_threat() call.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
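A toy illustration of the swap-and-append behaviour described above (std::vector stands in for the real pieceList[]/index[] bookkeeping):

    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<int> pieceList;          // squares of one piece type
        pieceList.push_back(10);
        pieceList.push_back(20);
        pieceList.push_back(30);
        pieceList.push_back(40);

        // do_move(): the captured piece (square 20) is overwritten by the last entry
        pieceList[1] = pieceList.back();
        pieceList.pop_back();                // now { 10, 40, 30 }

        // undo_move(): the captured piece comes back, but at the end of the list
        pieceList.push_back(20);             // now { 10, 40, 30, 20 }

        // Order differs from the original { 10, 20, 30, 40 }, so move generation
        // that scans this list emits moves in a different order than before.
        for (size_t i = 0; i < pieceList.size(); ++i)
            std::printf("%d ", pieceList[i]);
        std::printf("\n");
        return 0;
    }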
Marco Costalba [Sun, 30 Aug 2009 18:12:08 +0000 (19:12 +0100)]
Workaround a bug in Position::has_mate_threat()
It seems that pos.has_mate_threat() changes the position!
So calling the MovePicker constructor before or after the
has_mate_threat() call changes things!
The bug was exposed by the previous patch, which makes the MovePicker
constructor generate, score and sort good captures under some circumstances.
Because scoring the captures is position dependent, the moves
returned by MovePicker are different when the constructor is
called before has_mate_threat().
Of course this is only a workaround because the real bug is still
hidden :-(
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 30 Aug 2009 16:17:44 +0000 (17:17 +0100)]
Skip TT_MOVES phase when possible
If we don't have TT moves to search, skip the
useless loop associated with the TT_MOVES phase.
Another 1% speed boost that brings this series
to +6.2% against original revision
595a90df
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 30 Aug 2009 08:42:55 +0000 (09:42 +0100)]
MovePicker: take the move loop out of the switch statement
This not only cleans up the code but gives another
speed boost of 1.8%.
Since revision
595a90dfd0 we have increased PGO-compiled binary
speed by a whopping +5.2% without any functional change!!
This is really awesome considering that we have also
cut the line count by 25 lines.
Sometimes we spend days getting an extra 1% from move
generation, while instead the biggest optimizations come
from anonymous and apparently dull parts of the code.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 29 Aug 2009 19:19:09 +0000 (20:19 +0100)]
Revert "null move reorder" series
It does not seem to improve on the standard; the latest results
from Joona after 2040 games are negative:
Orig - Mod: 454 - 424 - 1162
And it is more or less the same as what I got a few days ago.
So revert for now.
Verified same functionality as
595a90dfd
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 29 Aug 2009 07:02:30 +0000 (08:02 +0100)]
Convert handling of tt moves and killers to standard form
Use the same way of looping along the move list used for
the other move kinds, so as to be consistent in get_next_move().
And a bit of the usual clean-up too, but just a bit.
It is even a bit (+0.3%) faster now. ;-)
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Thu, 27 Aug 2009 19:22:20 +0000 (20:22 +0100)]
Try null move before captures
Always after the TT move but before captures.
This seems a better setup than the version before this
patch.
After 999 games at 1+0
Mod - Orig +252 =527 -220 +11 ELO
Unfortunately it does not seem to improve on the standard
version, with null move outside of MovePicker (
595a90df) with
the latest speed-up patches added in.
After 999 games at 1+0
Mod - Standard +244 =506 -249 -2 ELO
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Fri, 28 Aug 2009 06:57:52 +0000 (08:57 +0200)]
Use pointers instead of array indices in MovePicker
This avoids calculating the array entry position
at each access and gives another boost of almost 1%.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
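A small sketch of the difference (illustrative MoveStack, not the actual MovePicker members): the pointer walk carries the current entry directly instead of recomputing base + index on every access.

    #include <cstdio>

    struct MoveStack { int move; int score; };

    int main() {
        MoveStack moves[4] = { {1, 0}, {2, 0}, {3, 0}, {4, 0} };

        // Index-based walk: every access is effectively moves + i * sizeof(MoveStack)
        for (int i = 0; i < 4; ++i)
            std::printf("%d ", moves[i].move);
        std::printf("\n");

        // Pointer-based walk: the current entry lives in a single pointer
        for (MoveStack *cur = moves, *last = moves + 4; cur != last; ++cur)
            std::printf("%d ", cur->move);
        std::printf("\n");
        return 0;
    }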
Marco Costalba [Thu, 27 Aug 2009 07:12:51 +0000 (09:12 +0200)]
Change the flow in which moves are generated and picked
In MovePicker we get the next move with pick_move_from_list(),
then check if the return value is equal to MOVE_NONE and
in this case we update the state to the new phase.
This patch reorders the flow so that pick_move_from_list(),
renamed get_next_move(), now directly calls go_next_phase() to
generate and sort the next bunch of moves when there are no more
moves to try. This avoids always checking the pick_move_from_list()
return value, and the flow is more linear and natural.
Also use a local variable instead of a pointer dereference in a
time-critical switch statement in get_next_move().
With this patch alone we have an incredible speed-up of 3.2%!!!
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
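A toy model of the reordered flow (illustrative names and dummy phases, not the real MovePicker): get_next_move() itself advances the phase whenever the current batch is exhausted, so callers never see MOVE_NONE between phases.

    #include <cstdio>

    struct Picker {
        int phase, cur, last;
        int batch[4];

        Picker() : phase(0), cur(0), last(0) {}

        // Generate and "sort" the next bunch of moves (dummy data here)
        void go_next_phase() {
            ++phase;
            cur = 0;
            last = (phase <= 2 ? 2 : 0);     // two moves per phase, then nothing
            for (int i = 0; i < last; ++i)
                batch[i] = phase * 10 + i;
        }

        int get_next_move() {
            while (cur == last) {            // batch exhausted: move to the next phase
                if (phase > 2)
                    return 0;                // 0 plays the role of MOVE_NONE
                go_next_phase();
            }
            return batch[cur++];
        }
    };

    int main() {
        Picker p;
        for (int m = p.get_next_move(); m != 0; m = p.get_next_move())
            std::printf("%d ", m);           // prints 10 11 20 21
        std::printf("\n");
        return 0;
    }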
Marco Costalba [Wed, 26 Aug 2009 15:59:58 +0000 (16:59 +0100)]
Disable null move again at depth == OnePly
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona Kiiski [Mon, 24 Aug 2009 17:06:09 +0000 (20:06 +0300)]
Use a special null move technique at low depth.
Try good captures before the null move when depth < 3 * OnePly.
Use this kind of null move also at depth == OnePly.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona Kiiski [Mon, 24 Aug 2009 15:08:31 +0000 (18:08 +0300)]
Use nullMove only through MovePicker.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona Kiiski [Mon, 24 Aug 2009 15:00:35 +0000 (18:00 +0300)]
Add Null move support to MovePicker.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Joona Kiiski [Mon, 24 Aug 2009 14:46:03 +0000 (17:46 +0300)]
Create useNullMove local variable
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 26 Aug 2009 13:33:17 +0000 (14:33 +0100)]
Clean up killers handling in MovePicker
Original patch from Joona with added optimizations
by me.
A great cleanup of MovePicker, with a speed improvement of 1%.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 24 Aug 2009 16:41:24 +0000 (17:41 +0100)]
Micro-optimize extension()
Explicitly write the conditions for a pawn push to the 7th rank
and for a passed pawn instead of wrapping them in redundant
helpers.
Also retire the now unused move_is_pawn_push_to_7th()
and the never used move_was_passed_pawn_push() and
move_is_deep_pawn_push().
Function extension() is so time critical that this
simple patch speeds up the PGO compile by 0.5%, and
it is also clearer what actually happens there.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 23 Aug 2009 17:57:11 +0000 (18:57 +0100)]
Merge branch 'master' of git-Stockfish@free2.projectlocker.com:sf
Marco Costalba [Sun, 23 Aug 2009 16:20:02 +0000 (17:20 +0100)]
Remove a local variable from pop_1st_bit()
Remove the 'b' uint32_t local variable.
The optimized assembly is more or less the same
(one 'mov' instruction less), but now the code is
written in a way more similar to the final assembly
flow, so it should be easier for the compiler to optimize.
Also guarantee that BitTable[] is always aligned.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 22 Aug 2009 11:10:02 +0000 (12:10 +0100)]
Poly ampli+bias values after 73831 games
Verified correct against the tuning branch.
After 999 games at 1+0
Mod vs Orig +257 =510 -232 51.20% +9 ELO
A very small increase, but an increase anyway!
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Tord Romstad [Fri, 21 Aug 2009 08:50:34 +0000 (10:50 +0200)]
Added a few new targets to the Makefile for OS X with icpc.
The following new targets were added:
* osx-icc32: 32-bit x86 compiled with icpc.
* osx-icc64: 64-bit x86 compiled with icpc.
* osx-icc32-profile: 32-bit x86 compiled with icpc and pgo.
* osx-icc64-profile: 64-bit x86 compiled with icpc and pgo.
Marco Costalba [Thu, 20 Aug 2009 15:30:34 +0000 (16:30 +0100)]
Fix some asserts raised by is_ok()
There were two asserts.
The first was raised because is_ok() was called at the
beginning of do_castle_move(), and this is wrong after
the last code reformatting because at that point the state
has already been modified by the caller do_move().
The second, raised by debugIncrementalEval, was due to a
rounding error in compute_value() that occurs because
TempoValueEndgame was updated to an odd number by patch
"Merge Joona Kiiski evaluation tweaks" (
3ed603cd) of 13/3/2009.
This line in compute_value() is the guilty one:
result += (side_to_move() == WHITE) ? TempoValue / 2 : -TempoValue / 2;
The fix is to increment TempoValueEndgame so that it is even.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
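A worked illustration of the rounding problem (the value 21 is made up; only the parity matters): splitting an odd tempo value with integer division loses one point, so a full recomputation can disagree by one with an incrementally maintained score, while an even value splits exactly.

    #include <cstdio>

    int main() {
        int tempoOdd  = 21;                                  // odd: 21/2 truncates to 10
        int tempoEven = 22;                                  // even: splits exactly

        std::printf("%d\n", tempoOdd / 2 + tempoOdd / 2);    // 20, one point lost
        std::printf("%d\n", tempoEven / 2 + tempoEven / 2);  // 22, no loss
        return 0;
    }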
Tord Romstad [Thu, 20 Aug 2009 14:54:20 +0000 (16:54 +0200)]
Fixed incorrect material key update when making promotion moves.
Marco Costalba [Tue, 18 Aug 2009 15:54:46 +0000 (16:54 +0100)]
More use of memset() in Position::clear()
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Tue, 18 Aug 2009 00:21:01 +0000 (01:21 +0100)]
Small do_move() micro-optimizations
Also a few remaining style touches.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 17 Aug 2009 22:12:38 +0000 (23:12 +0100)]
Better clarify how pieceList[] and index[] work
Rearrange the code a bit to be more self-documenting.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 17 Aug 2009 07:57:09 +0000 (08:57 +0100)]
Unify patch series summary
This patch seems bigger than it actually is.
It just moves some code around and adds a few coding style fixes
to do_move() and undo_move(), so as to have uniform naming in both
functions.
The diffstat for the whole patch series is
239 insertions(+), 426 deletions(-)
And the final MSVC PGO build is even a bit faster:
Before 448,051 nodes/sec
After 453,810 nodes/sec (+1.3%)
No functional change (tested on more than 100M nodes)
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 16 Aug 2009 16:58:24 +0000 (17:58 +0100)]
Unify undo_ep_move(m)
Integrate undo_ep_move() into undo_move(); this reduces the line count
and improves code readability.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 16 Aug 2009 16:07:10 +0000 (17:07 +0100)]
Unify undo_promotion_move()
Integrate undo_promotion_move() into undo_move(); this reduces the line count
and improves code readability.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 16 Aug 2009 13:49:15 +0000 (14:49 +0100)]
Unify do_promotion_move()
Integrate do_promotion_move() into do_move(); this reduces the line count
and improves code readability.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 16 Aug 2009 13:07:34 +0000 (14:07 +0100)]
Unify do_ep_move()
Integrate do_ep_move() into do_move(); this reduces the line count
and improves code readability.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 15 Aug 2009 14:18:17 +0000 (15:18 +0100)]
L1/L2 friendly PhaseTable[]
In the MovePicker constructor we access, during initialization, one of
the MainSearchPhaseIndex..QsearchWithoutChecksPhaseIndex globals.
Postpone the definition of PhaseTable[] to just after them, so that
when PhaseTable[] is accessed later in get_next_move()
it is already present in L1/L2.
It works like an implicit prefetch of PhaseTable[].
Also shrink PhaseTable[] to 16 bytes, so that it fits in an L1 cache
line, by using uint8_t instead of int.
This apparently innocuous patch gives an astonishing speed-up
of 1.6% under MSVC 2010 beta, PGO optimized!
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Fri, 14 Aug 2009 11:47:49 +0000 (12:47 +0100)]
Use optimized pop_1st_bit() under Windows 64 with icc
The Intel compiler can handle this code even under Windows,
so lift the constraint.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Thu, 13 Aug 2009 10:45:35 +0000 (12:45 +0200)]
Better name and document some endgame functions
In particular the generic scaling functions.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Wed, 12 Aug 2009 07:40:03 +0000 (09:40 +0200)]
Finally fix prefetch on Linux
It was due to a missing -msse compiler option!
Without this option the CPU silently discards
prefetcht2 instructions during execution.
Also added a (gcc-documented) hack to prevent the Intel
compiler from optimizing away the prefetches.
Special thanks to Heinz for testing and suggesting
improvements, and to Jim for testing icc on Windows.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Tue, 11 Aug 2009 07:30:19 +0000 (08:30 +0100)]
Reuse 5 slots instead of 4
But this time with the guarantee of always-aligned
access, so that prefetching is not adversely impacted.
On Joona's PC,
1+0, 64MB hash:
Orig - Mod: 174 - 237 - 359
Instead, after 1000 games at 1+0 with 128MB hash size
we are at +1 ELO (just 4 games of difference).
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 10 Aug 2009 13:23:19 +0000 (14:23 +0100)]
Double prefetch on Windows
After fixing the CPU frequency with the RightMark tool I was
able to speed-test all the different prefetch combinations.
Here are the results:
OS: Windows Vista 32-bit, MSVC compile
CPU: Intel Core 2 Duo T5220 1.55 GHz
bench at depth 12, 1 thread,
26552844 nodes searched
results in nodes/sec
no-prefetch
402486, 402005, 402767, 401439, 403060
single prefetch (aligned 64)
410145, 409159, 408078, 410443, 409652
double prefetch (aligned 64) 0+32
414739, 411238, 413937, 414641, 413834
double prefetch (aligned 64) 0+64
413537, 414337, 413537, 414842, 414240
And now also some crazy stuff:
single prefetch (aligned 128)
410145, 407395, 406230, 410050, 409949
double prefetch (aligned 64) 0+0
409753, 410044, 409456
single prefetch (aligned 64) +32
408379, 408272, 406809
single prefetch (aligned 64) +64
408279, 409059, 407395
So it seems the best is a double prefetch at the address +32 or +64;
I will choose the second one because it seems more natural to me.
It is still a mystery why it doesn't work under Linux :-(
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 10 Aug 2009 10:59:07 +0000 (12:59 +0200)]
Avoid the Intel compiler optimizing away prefetching
Without this hack the Intel compiler happily optimizes
away the gcc builtin call.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 10 Aug 2009 07:35:46 +0000 (09:35 +0200)]
Use aligned prefetch address
Always prefetch from a cache line boundary. It seems
that if the prefetch address is not cache-line aligned then
performance is adversely impacted.
Hopefully we will reuse those 32 bits of padding for something
useful in the future.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
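A minimal sketch of rounding a prefetch address down to a cache line boundary (the 64-byte line size and the helper name are assumptions, not the actual code):

    #include <cstdint>
    #include <cstdio>

    // Mask off the low bits so the prefetch always starts at a 64-byte boundary
    inline const char* cache_line_align(const void* addr) {
        return reinterpret_cast<const char*>(
            reinterpret_cast<uintptr_t>(addr) & ~uintptr_t(63));
    }

    int main() {
        char buffer[256];
        const char* p = buffer + 100;
        std::printf("%p -> %p\n",
                    static_cast<const void*>(p),
                    static_cast<const void*>(cache_line_align(p)));
        return 0;
    }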
Marco Costalba [Mon, 10 Aug 2009 06:43:39 +0000 (08:43 +0200)]
Remove old BishopPairBonus constants
Now that we have the polynomial imbalance these
are no longer used.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 9 Aug 2009 23:20:54 +0000 (01:20 +0200)]
Enable prefetch also for gcc
This fixes a compile error under Linux with gcc when
the Intel dev libraries are not present.
Also simplify the previous patch by moving the TT definition
from search.cpp to tt.cpp, so as to avoid passing a
pointer to the TT to the current position.
Finally simplify do_move(); we now miss a prefetch in the
rare case of setting an en-passant square, but the code is
much cleaner and the performance penalty is almost zero.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 9 Aug 2009 14:53:51 +0000 (15:53 +0100)]
Try to prefetch as soon as position key is ready
Move the prefetching code inside do_move() so as to allow
very early prefetching and to put as many instructions
as possible between the prefetch and the following retrieve().
With this patch retrieve() times are cut by another 25%.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 9 Aug 2009 12:44:55 +0000 (13:44 +0100)]
Add TT prefetching support
TT.retrieve() is the most time-consuming function
because it almost always involves a very slow RAM access.
The TT table is so big that it is never cached. This patch
prefetches TT data just after a move is made, so that the
subsequent TT.retrieve() will be very fast.
Profiling with VTune shows that TT.retrieve() times are
almost cut in half!
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
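A hedged sketch of the idea (hypothetical cluster layout and names, not the real tt.cpp): as soon as the new position key is known inside do_move(), compute the cluster address and issue a prefetch so the later probe finds the data already in cache.

    #include <cstdint>
    #include <cstdio>

    struct Cluster { uint64_t raw[8]; };     // stand-in for a group of TT entries

    struct TranspositionTable {
        Cluster* clusters;
        size_t   numClusters;

        void prefetch(uint64_t posKey) const {
            const void* addr = &clusters[posKey % numClusters];
    #if defined(__GNUC__) || defined(__clang__)
            __builtin_prefetch(addr);        // cache hint only; safe to skip elsewhere
    #else
            (void)addr;
    #endif
        }
    };

    int main() {
        static Cluster table[1024];
        TranspositionTable tt = { table, 1024 };
        tt.prefetch(0x123456789abcdefULL);   // issued right after the key update in do_move()
        std::printf("prefetch issued\n");
        return 0;
    }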
Marco Costalba [Sun, 9 Aug 2009 03:35:46 +0000 (04:35 +0100)]
Use 5 TTEntry slots instead of 4
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sun, 9 Aug 2009 03:19:32 +0000 (04:19 +0100)]
Use 32 bit key in TT
Shrink the key to 32 bits instead of 64. To still avoid
collisions, use the high 32 bits of the position key as the
TT key and the low 32 bits to compute the correct
cluster index in the table.
With this patch the size of TTEntry shrinks to 96 bits instead
of 128, and a cluster of 4 TTEntry sums to 48 bytes instead
of 64.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
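A small sketch of the key split described above (the constants are illustrative, not the real table size): the low half of the 64-bit position key picks the cluster, while only the high half needs to be stored and compared inside each entry.

    #include <cstdint>
    #include <cstdio>

    int main() {
        const uint64_t posKey      = 0x9d39247e33776d41ULL;  // some 64-bit position key
        const uint32_t numClusters = 1 << 20;                // power-of-two table size

        uint32_t storedKey  = uint32_t(posKey >> 32);        // kept inside the TT entry
        uint32_t clusterIdx = uint32_t(posKey) & (numClusters - 1);  // selects the cluster

        std::printf("cluster %u, stored key %08x\n", clusterIdx, storedKey);
        return 0;
    }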
Marco Costalba [Sat, 8 Aug 2009 16:37:13 +0000 (17:37 +0100)]
Makefile: added 'make strip' target
Binaries are always built with the symbol table in, to ease
debugging and profiling.
It is now possible to run:
make strip
to remove the symbol table from the compiled binary. This
can be useful to prepare the release version.
Patch by Heinz van Saanen.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 8 Aug 2009 16:04:01 +0000 (17:04 +0100)]
Let LMR at root be independent of MultiPV value
The current formula enables LMR when
i + MultiPV >= LMRPVMoves
It means that, for instance, if MultiPV == 1 then LMR
starts being considered at move i = LMRPVMoves - 1,
while if MultiPV == 3 it starts earlier,
at move i = LMRPVMoves - 3.
With this patch the formula becomes
i >= MultiPV + LMRPVMoves - 2
so that LMR always starts after LMRPVMoves - 1 moves
from the last PV move.
No functional change when MultiPV == 1
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
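A quick numeric check of the two rules (LMRPVMoves = 10 is just an example value): both give the same starting move when MultiPV == 1, but the new rule keeps the same distance from the last PV move as MultiPV grows.

    #include <cstdio>

    int main() {
        const int LMRPVMoves = 10;                        // example value only
        for (int MultiPV = 1; MultiPV <= 3; ++MultiPV) {
            int oldStart = LMRPVMoves - MultiPV;          // old rule: i + MultiPV >= LMRPVMoves
            int newStart = MultiPV + LMRPVMoves - 2;      // new rule: i >= MultiPV + LMRPVMoves - 2
            std::printf("MultiPV %d: first reduced move  old %d  new %d\n",
                        MultiPV, oldStart, newStart);
        }
        return 0;
    }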
Marco Costalba [Sat, 8 Aug 2009 12:06:50 +0000 (13:06 +0100)]
Speed up polynomial material imbalance loop
Access pos.piece_count() only once and avoid some
branches in the inner loop.
Profiling with VTune shows a 20% speed improvement in
get_material_info(), and it is also a bit cleaner
this way ;-)
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 8 Aug 2009 11:28:03 +0000 (12:28 +0100)]
There is no need to special-case the KNNK ending
It is always a draw, so use the corresponding proper
evaluation function.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Mon, 27 Jul 2009 09:28:29 +0000 (11:28 +0200)]
Move halfOpenFiles[] calculation out of a loop
And put it in an already existing one, so as to
optimize a bit.
Also some additional cleanups and code shuffling
all around the place.
No functional change.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 8 Aug 2009 07:12:31 +0000 (09:12 +0200)]
Compile without DEBUG flag by default
Also build the symbol table. It can easily be stripped
after the .exe is built, and it is necessary for profiling.
Signed-off-by: Marco Costalba <mcostalba@gmail.com>
Marco Costalba [Sat, 8 Aug 2009 02:46:43 +0000 (03:46 +0100)]
Revert material balance values after 100000 games
After Joona's direct testing with ~2000 games it seems the
values after 100,000 games do not give any advantage,
so revert for now.
Score of Stockfish_0 vs Stockfish_15: 491 - 392 - 1102
Score of Stockfish_0 vs Stockfish_40: 461 - 439 - 1076
Score of Stockfish_0 vs Stockfish_65: 442 - 518 - 1018 (13 elo)
Score of Stockfish_0 vs Stockfish_100: 504 - 502 - 984
Signed-off-by: Marco Costalba <mcostalba@gmail.com>