--- /dev/null
+Usage:
+
+ inigo [ -group [ name=value ]* ]
+ [ -consumer id[:arg] [ name=value ]* ]
+ [ -filter id[:arg] [ name=value ] * ]
+ [ -transition id[:arg] [ name=value ] * ]
+ [ -blank frames ]
+ [ -track ]
+ [ producer [ name=value ] * ]+
+ [ -serialise file.inigo ]
+
+General rules:
+
+ 1. Order is incredibly important;
+ 2. Error checking on command line parsing is weak;
+ 3. This document does not duplicate the information in services.txt.
+
+Terminology:
+
+ 'Producers' typically refer to files but may also indicate
+ devices (such as dv1394 input or video4linux). Hence, the more
+ generic term is used [yes, the more generic usage is out of
+ scope for now...].
+
+ 'Filters' are frame modifiers - they always guarantee that for
+ every frame they receive, they output *precisely* one frame.
+ Never more, never less, ever.
+
+ 'Transitions' collect frames from two tracks (a and b) and
+ output 1 modified frame on their 'a track', and 1 unmodified
+ frame on their 'b track'. Never more, never less, ever.
+
+ 'Consumers' collect frames from a producer, do something with
+ them and destroy them.
+
+ Collectively, these are known as 'services'.
+
+ All services have 'properties' associated with them. These are
+ typically defaulted or evaluated and may be overridden on a
+ case by case basis.
+
+ All services except consumers obey in and out properties.
+
+ Consumers have no say in the flow of frames [though they may
+ give the illusion that they do]. They get frames from a
+ connected producer, use them, destroy them and get more.
+
+Basics:
+
+ To play a file with the default SDL PAL consumer, usage is:
+
+ $ inigo file
+
+ Note that 'file' can be anything that inigo has a known
+ 'producer' mapping for (so this can be anything from .dv to
+ .txt).
+
+Properties:
+
+ Properties can be assigned to the producer by adding additional
+ name=value pairs after the producer:
+
+ $ inigo file in=50 out=100 something="something else"
+
+ Note that while some properties have meaning to all producers
+ (for example: in, out and length are guaranteed to be valid for
+ all, though length is typically determined automatically), the
+ validity of others depends on the producer. Properties will
+ always be assigned, but that doesn't mean they will be used.
+
+Multiple Files:
+
+ Multiple files of different types can be used:
+
+ $ inigo a.dv b.mpg c.png
+
+ Properties can be assigned to each file:
+
+ $ inigo a.dv in=50 out=100 b.mpg out=500 c.png out=500
+
+Filters:
+
+ The Multiple Files examples above will logically play out one
+ after the other.
+
+ However, inigo doesn't care too much about changes in frame
+ dimensions or audio specification, so you may need to add
+ normalising filters to handle that, ie:
+
+ $ inigo a.dv b.mpg c.png -filter resize -filter resample
+
+ These filters are designed to guarantee that the consumer gets
+ what it asks for.
+
+ It should also be stressed that filters are applied in the order
+ in which they're specified.
+
+Filter Properties:
+
+ As with producers, properties may be specified on filters too.
+
+ Again, in and out properties are common to all, so to apply a
+ filter to a range of frames, you would use something like:
+
+ $ inigo a.dv -filter greyscale in=0 out=50
+
+ Again, filters have their own set of rules about properties and
+ will silently ignore properties that do not apply.
+
+Groups:
+
+ The -group switch is provided to force default properties on the
+ following 'services'. For example:
+
+ $ inigo -group in=0 out=49 clip*
+
+ would play the first 50 frames of all clips that match the wild
+ card pattern.
+
+ Note that the last -group settings also apply to the following
+ filters, transitions and consumers, so:
+
+ $ inigo -group in=0 out=49 clip* -filter greyscale
+
+ is *probably not* what you want (ie: the greyscale filter would
+ only be applied to the first 50 frames).
+
+ To shed the group properties, you can use an empty group:
+
+ $ inigo -group in=0 out=49 clip* -group -filter greyscale
+
+Introducing Tracks and Blanks:
+
+ So far, all of the examples have shown the definition of a
+ single playlist, or more accurately, track.
+
+ When multiple tracks exist, the consumer will receive a frame
+ from the 'lowest numbered' track that is generating a non-blank
+ frame.
+
+ It is best to visualise a track arrangement, so we'll start with
+ an example:
+
+ $ inigo a.dv out=49 -track b.dv
+
+ This can be visualised as follows:
+
+ +-------+
+ |a |
+ +-------+----------+
+ |b |
+ +------------------+
+
+ Playout will show the first 50 frames of a and the 51st frame
+ shown will be the 51st frame of b.
+
+ To have the 51st frame be the first frame of b, we can use
+ the -blank switch:
+
+ $ inigo a.dv out=49 -track -blank 49 b.dv
+
+ Which we can visualise as:
+
+ +-------+
+ |a |
+ +-------+-------------------+
+ |b |
+ +-------------------+
+
+ Now playout will continue as though a and b clips are on the
+ same track (which is about as useful as reversing the process of
+ slicing bread).
+
+Transitions:
+
+ Where tracks become useful is in the placing of transitions.
+
+ Here we need tracks to overlap, so a useful multitrack
+ definition could be given as:
+
+ $ inigo a.dv out=49 -transition luma in=25 out=49 \
+ -track \
+ -blank 24 b.dv
+
+ Now we're cooking - our visualisation would be something like:
+
+ +-------+
+ |a |
+ +----+--+---------------+
+ |b |
+ +------------------+
+
+ Playout will now show the first 25 frames of a and then a fade
+ transition for 25 frames between a and b, and will finally
+ play out the remainder of b.
+
+Reversing a Transition:
+
+ When we visualise a track definition, we also see situations
+ like:
+
+ +-------+ +----------+
+ |a1 | |a2 |
+ +----+--+--------------+----+-----+
+ |b |
+ +----------------------+
+
+ In this case, we have two transitions, a1 to b and b to a2.
+
+ In this scenario, we define a command line as follows:
+
+ $ inigo a1.dv out=49 -blank 49 a2.dv \
+ -transition luma in=25 out=49 \
+ -transition luma in=100 out=124 reverse=1 \
+ -track \
+ -blank 24 b.dv out=99
+
+Filters and Tracks:
+
+ A filter applies to a [specified region of a] single track, so
+ normalisation filters need to be applied to each track when
+ applicable.
+
+ This user specification is a necessary evil (you do not want to
+ resize a text or png overlay to be the size of the frame that
+ the consumer is requesting, and you may not want to unnecessarily
+ resize a video track if you will later be rescaling it for
+ composition).
+
+Serialisation:
+
+ Inigo has a built-in serialisation mechanism - you can build up
+ your command, test it via any consumer and then add a -serialise
+ file.inigo switch to save it.
+
+ The saved file can be subsequently used as a clip by either
+ miracle or inigo. Take care though - paths to files are saved as
+ provided on the command line....
+
+Missing Features:
+
+ Some filters/transitions should be applied to the output frame
+ regardless of which track it comes from - for example, you might
+ have a 3rd text track or a watermark which you want composited
+ on every frame, and of course, there's the obscure feature....
+
+ A -post switch will be added to provide this feature at some
+ point soon.
+
Not Implemented
------------------------------------------------------------------------------
NLS
-INSERT
-MOVE
-REMOVE
USET points=ignore
USET eof=terminate
XFER
Incorrect Behaviour
------------------------------------------------------------------------------
killall miracle does not work; it requires killall -HUP
-STOP does not play the test card (white silence) (=pause)
USTA when stopped reports "paused"
CLEAN removes all clips (as opposed to leaving the currently playing one)
USET eof=pause is partially supported
Different Intentional Behaviour
------------------------------------------------------------------------------
-LOAD commences play
-STOP does not terminate audio/video output
MLT Bugs
return mlt_service_connect_producer( &this->parent, producer, 0 );
}
+/** Start the consumer.
+*/
+
+int mlt_consumer_start( mlt_consumer this )
+{
+ if ( this->start != NULL )
+ return this->start( this );
+ return 0;
+}
+
+/** Stop the consumer.
+*/
+
+int mlt_consumer_stop( mlt_consumer this )
+{
+ if ( this->stop != NULL )
+ return this->stop( this );
+ return 0;
+}
+
+/** Determine if the consumer is stopped.
+*/
+
+int mlt_consumer_is_stopped( mlt_consumer this )
+{
+ if ( this->is_stopped != NULL )
+ return this->is_stopped( this );
+ return 0;
+}
+
/** Close the consumer.
*/
struct mlt_service_s parent;
// public virtual
+ int ( *start )( mlt_consumer );
+ int ( *stop )( mlt_consumer );
+ int ( *is_stopped )( mlt_consumer );
void ( *close )( mlt_consumer );
// Private data
extern mlt_service mlt_consumer_service( mlt_consumer this );
extern mlt_properties mlt_consumer_properties( mlt_consumer this );
extern int mlt_consumer_connect( mlt_consumer this, mlt_service producer );
+extern int mlt_consumer_start( mlt_consumer this );
+extern int mlt_consumer_stop( mlt_consumer this );
+extern int mlt_consumer_is_stopped( mlt_consumer this );
extern void mlt_consumer_close( mlt_consumer );
#endif
if ( p_alpha )
p_alpha += x_src + y_src * stride_src / 2;
+ uint8_t *p = p_src;
+ uint8_t *q = p_dest;
+ uint8_t *o = p_dest;
+ uint8_t *z = p_alpha;
+
+ uint8_t Y;
+ uint8_t UV;
+ uint8_t a;
+ float value;
+
// now do the compositing only to cropped extents
for ( i = 0; i < height_src; i++ )
{
- uint8_t *p = p_src;
- uint8_t *q = p_dest;
- uint8_t *o = p_dest;
- uint8_t *z = p_alpha;
+ p = p_src;
+ q = p_dest;
+ o = p_dest;
+ z = p_alpha;
for ( j = 0; j < width_src; j ++ )
{
- uint8_t y = *p ++;
- uint8_t uv = *p ++;
- uint8_t a = ( z == NULL ) ? 255 : *z ++;
- float value = ( weight * ( float ) a / 255.0 );
- *o ++ = (uint8_t)( y * value + *q++ * ( 1 - value ) );
- *o ++ = (uint8_t)( uv * value + *q++ * ( 1 - value ) );
+ Y = *p ++;
+ UV = *p ++;
+ a = ( z == NULL ) ? 255 : *z ++;
+ value = ( weight * ( float ) a / 255.0 );
+ *o ++ = (uint8_t)( Y * value + *q++ * ( 1 - value ) );
+ *o ++ = (uint8_t)( UV * value + *q++ * ( 1 - value ) );
}
p_src += stride_src;
#include <stdio.h>
#include <stdlib.h>
+#include <string.h>
/** Private structure.
*/
last = time;
fprintf( stderr, "%d: %lld\n", i, time );
}
+ fprintf( stderr, "Current Position: %lld\n", mlt_producer_position( producer ) );
}
break;
mlt_producer_seek( producer, time );
}
break;
+ case 'H':
+ if ( producer != NULL )
+ {
+ mlt_position position = mlt_producer_position( producer );
+ mlt_producer_seek( producer, position - ( mlt_producer_get_fps( producer ) * 60 ) );
+ }
+ break;
case 'h':
- if ( multitrack != NULL )
+ if ( producer != NULL )
{
mlt_position position = mlt_producer_position( producer );
mlt_producer_set_speed( producer, 0 );
- mlt_producer_seek( producer, position - 1 >= 0 ? position - 1 : 0 );
+ mlt_producer_seek( producer, position - 1 );
}
break;
case 'j':
}
break;
case 'l':
- if ( multitrack != NULL )
+ if ( producer != NULL )
{
mlt_position position = mlt_producer_position( producer );
mlt_producer_set_speed( producer, 0 );
mlt_producer_seek( producer, position + 1 );
}
break;
+ case 'L':
+ if ( producer != NULL )
+ {
+ mlt_position position = mlt_producer_position( producer );
+ mlt_producer_seek( producer, position + ( mlt_producer_get_fps( producer ) * 60 ) );
+ }
+ break;
}
}
}
fprintf( stderr, "+-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+\n" );
fprintf( stderr, "+---------------------------------------------------------------------+\n" );
- fprintf( stderr, "| h = previous, l = next |\n" );
+ fprintf( stderr, "| H = back 1 minute, L = forward 1 minute |\n" );
+ fprintf( stderr, "| h = previous frame, l = next frame |\n" );
fprintf( stderr, "| g = start of clip, j = next clip, k = previous clip |\n" );
fprintf( stderr, "| 0 = restart, q = quit, space = play |\n" );
fprintf( stderr, "+---------------------------------------------------------------------+\n" );
// Connect consumer to tractor
mlt_consumer_connect( consumer, mlt_field_service( field ) );
+ // Start the consumer
+ mlt_consumer_start( consumer );
+
// Transport functionality
transport( inigo );
+
+ // Stop the consumer
+ mlt_consumer_stop( consumer );
}
else if ( store != NULL )
{
fprintf( stderr, "Project saved as %s.\n", name );
fclose( store );
}
-
}
else
{
" [ -consumer id[:arg] [ name=value ]* ]\n"
" [ -filter id[:arg] [ name=value ] * ]\n"
" [ -transition id[:arg] [ name=value ] * ]\n"
- " [ -blank time ]\n"
+ " [ -blank frames ]\n"
" [ -track ]\n"
" [ producer [ name=value ] * ]+\n" );
}
result = mlt_factory_producer( "pixbuf", file );
else if ( strstr( file, ".png" ) )
result = mlt_factory_producer( "pixbuf", file );
+ else if ( strstr( file, ".tga" ) )
+ result = mlt_factory_producer( "pixbuf", file );
+ else if ( strstr( file, ".txt" ) )
+ result = mlt_factory_producer( "pango", file );
// 2nd Line fallbacks
if ( result == NULL && strstr( file, ".dv" ) )
{
clear_unit( unit );
miracle_log( LOG_DEBUG, "Cleaned playlist" );
+ miracle_unit_status_communicate( unit );
return valerie_ok;
}
return valerie_invalid_file;
}
-/** Start playing the clip.
-
- Start a dv-pump and commence dv1394 transmission.
+/** Start playing the unit.
\todo error handling
\param unit A miracle_unit handle.
mlt_properties properties = unit->properties;
mlt_playlist playlist = mlt_properties_get_data( properties, "playlist", NULL );
mlt_producer producer = mlt_playlist_producer( playlist );
+ mlt_consumer consumer = mlt_properties_get_data( unit->properties, "consumer", NULL );
mlt_producer_set_speed( producer, ( double )speed / 1000 );
+ mlt_consumer_start( consumer );
miracle_unit_status_communicate( unit );
}
void miracle_unit_terminate( miracle_unit unit )
{
+ mlt_consumer consumer = mlt_properties_get_data( unit->properties, "consumer", NULL );
+ mlt_consumer_stop( consumer );
+ miracle_unit_status_communicate( unit );
}
/** Query the status of unit playback.
int miracle_unit_has_terminated( miracle_unit unit )
{
- return 0;
+ mlt_consumer consumer = mlt_properties_get_data( unit->properties, "consumer", NULL );
+ return mlt_consumer_is_stopped( consumer );
}
/** Transfer the currently loaded clip to another unit
status->generation = mlt_properties_get_int( properties, "generation" );
- if ( !strcmp( status->clip, "" ) )
+ if ( miracle_unit_has_terminated( unit ) )
+ status->status = unit_stopped;
+ else if ( !strcmp( status->clip, "" ) )
status->status = unit_not_loaded;
else if ( status->speed == 0 )
status->status = unit_paused;
if ( unit == NULL )
return RESPONSE_INVALID_UNIT;
else
+ {
miracle_unit_play( unit, 0 );
+ miracle_unit_terminate( unit );
+ }
return RESPONSE_SUCCESS;
}
char *filename;
int width;
int height;
- double *bitmap;
+ float *bitmap;
}
transition_luma;
// image processing functions
-static inline double smoothstep( double edge1, double edge2, double a )
+static inline float smoothstep( float edge1, float edge2, float a )
{
if ( a < edge1 )
return 0.0;
\param field_order -1 = progressive, 0 = lower field first, 1 = top field first
*/
static void luma_composite( mlt_frame this, mlt_frame b_frame, int luma_width, int luma_height,
- double *luma_bitmap, double pos, double frame_delta, double softness, int field_order )
+ float *luma_bitmap, float pos, float frame_delta, float softness, int field_order )
{
int width_src, height_src;
int width_dest, height_dest;
int i, j;
int stride_src;
int stride_dest;
- double weight = 0;
+ float weight = 0;
int field;
format_src = mlt_image_yuv422;
stride_src = width_src * 2;
stride_dest = width_dest * 2;
+ // Offset the position based on which field we're looking at ...
+ float field_pos[ 2 ];
+ field_pos[ 0 ] = pos + ( ( field_order == 0 ? 1 : 0 ) * frame_delta * 0.5 );
+ field_pos[ 1 ] = pos + ( ( field_order == 0 ? 0 : 1 ) * frame_delta * 0.5 );
+
+ // adjust the position for the softness level
+ field_pos[ 0 ] *= ( 1.0 + softness );
+ field_pos[ 1 ] *= ( 1.0 + softness );
+
+ uint8_t *p;
+ uint8_t *q;
+ uint8_t *o;
+ float *l;
+
+ uint8_t y;
+ uint8_t uv;
+ float value;
+
// composite using luma map
for ( field = 0; field < ( field_order < 0 ? 1 : 2 ); ++field )
{
- // Offset the position based on which field we're looking at ...
- double field_pos = pos + ( ( field_order == 0 ? 1 - field : field) * frame_delta * 0.5 );
-
- // adjust the position for the softness level
- field_pos *= ( 1.0 + softness );
-
for ( i = field; i < height_src; i += ( field_order < 0 ? 1 : 2 ) )
{
- uint8_t *p = &p_src[ i * stride_src ];
- uint8_t *q = &p_dest[ i * stride_dest ];
- uint8_t *o = &p_dest[ i * stride_dest ];
- double *l = &luma_bitmap[ i * luma_width ];
+ p = &p_src[ i * stride_src ];
+ q = &p_dest[ i * stride_dest ];
+ o = &p_dest[ i * stride_dest ];
+ l = &luma_bitmap[ i * luma_width ];
for ( j = 0; j < width_src; j ++ )
{
- uint8_t y = *p ++;
- uint8_t uv = *p ++;
- weight = *l ++;
- double value = smoothstep( weight, weight + softness, field_pos );
+ y = *p ++;
+ uv = *p ++;
+ weight = *l ++;
+ value = smoothstep( weight, weight + softness, field_pos[ field ] );
*o ++ = (uint8_t)( y * value + *q++ * ( 1 - value ) );
*o ++ = (uint8_t)( uv * value + *q++ * ( 1 - value ) );
mlt_properties b_props = mlt_frame_properties( b_frame );
// Arbitrary composite defaults
- static double previous_mix = 0;
- double mix = 0;
- int luma_width = 0;
- int luma_height = 0;
- double *luma_bitmap = NULL;
- double luma_softness = 0;
- int progressive = 0;
- int top_field_first = 0;
-
- // mix is the offset time value in the duration of the transition
- // - also used as the mixing level for a dissolve
- if ( mlt_properties_get( b_props, "image.mix" ) != NULL )
- mix = mlt_properties_get_double( b_props, "image.mix" );
-
- // (mix - previous_mix) is the animation delta, if backwards reset previous
- if ( mix < previous_mix )
- previous_mix = 0;
-
- // Get the interlace and field properties of the frame
- if ( mlt_properties_get( b_props, "progressive" ) != NULL )
- progressive = mlt_properties_get_int( b_props, "progressive" );
- if ( mlt_properties_get( b_props, "top_field_first" ) != NULL )
- top_field_first = mlt_properties_get_int( b_props, "top_field_first" );
-
- // Get the luma map parameters
- if ( mlt_properties_get( b_props, "luma.width" ) != NULL )
- luma_width = mlt_properties_get_int( b_props, "luma.width" );
- if ( mlt_properties_get( b_props, "luma.height" ) != NULL )
- luma_height = mlt_properties_get_int( b_props, "luma.height" );
- if ( mlt_properties_get( b_props, "luma.softness" ) != NULL )
- luma_softness = mlt_properties_get_double( b_props, "luma.softness" );
- luma_bitmap = (double*) mlt_properties_get_data( b_props, "luma.bitmap", NULL );
+ float frame_delta = 1 / mlt_properties_get_double( b_props, "fps" );
+ float mix = mlt_properties_get_double( b_props, "image.mix" );
+ int luma_width = mlt_properties_get_int( b_props, "luma.width" );
+ int luma_height = mlt_properties_get_int( b_props, "luma.height" );
+ float *luma_bitmap = mlt_properties_get_data( b_props, "luma.bitmap", NULL );
+ float luma_softness = mlt_properties_get_double( b_props, "luma.softness" );
+ int progressive = mlt_properties_get_int( b_props, "progressive" );
+ int top_field_first = mlt_properties_get_int( b_props, "top_field_first" );
+ int reverse = mlt_properties_get_int( b_props, "luma.reverse" );
+
+ // Honour the reverse here
+ mix = reverse ? 1 - mix : mix;
if ( luma_width > 0 && luma_height > 0 && luma_bitmap != NULL )
// Composite the frames using a luma map
- luma_composite( this, b_frame, luma_width, luma_height, luma_bitmap, mix, mix - previous_mix,
+ luma_composite( this, b_frame, luma_width, luma_height, luma_bitmap, mix, frame_delta,
luma_softness, progressive > 0 ? -1 : top_field_first );
else
// Dissolve the frames using the time offset for mix value
*height = mlt_properties_get_int( a_props, "height" );
*image = mlt_properties_get_data( a_props, "image", NULL );
- previous_mix = mix;
-
return 0;
}
/** Load the luma map from PGM stream.
*/
-static void luma_read_pgm( FILE *f, double **map, int *width, int *height )
+static void luma_read_pgm( FILE *f, float **map, int *width, int *height )
{
uint8_t *data = NULL;
while (1)
int i = 2;
int maxval;
int bpp;
- double *p;
+ float *p;
line[127] = '\0';
break;
// allocate the luma bitmap
- *map = p = (double*) malloc( *width * *height * sizeof( double ) );
+ *map = p = (float*) malloc( *width * *height * sizeof( float ) );
if ( *map == NULL )
break;
for ( i = 0; i < *width * *height * bpp; i += bpp )
{
if ( bpp == 1 )
- *p++ = (double) data[ i ] / (double) maxval;
+ *p++ = (float) data[ i ] / (float) maxval;
else
- *p++ = (double) ( ( data[ i ] << 8 ) + data[ i+1 ] ) / (double) maxval;
+ *p++ = (float) ( ( data[ i ] << 8 ) + data[ i+1 ] ) / (float) maxval;
}
break;
mlt_position in = mlt_transition_get_in( transition );
mlt_position out = mlt_transition_get_out( transition );
mlt_position time = mlt_frame_get_position( b_frame );
- double pos = ( double )( time - in ) / ( double )( out - in + 1 );
+ float pos = ( float )( time - in ) / ( float )( out - in + 1 );
// Set the b frame properties
mlt_properties_set_double( b_props, "image.mix", pos );
mlt_properties_set_int( b_props, "luma.width", this->width );
mlt_properties_set_int( b_props, "luma.height", this->height );
mlt_properties_set_data( b_props, "luma.bitmap", this->bitmap, 0, NULL, NULL );
- if ( mlt_properties_get( properties, "softness" ) != NULL )
- mlt_properties_set_double( b_props, "luma.softness", mlt_properties_get_double( properties, "softness" ) );
+ mlt_properties_set_int( b_props, "luma.reverse", mlt_properties_get_int( properties, "reverse" ) );
+ mlt_properties_set_double( b_props, "luma.softness", mlt_properties_get_double( properties, "softness" ) );
mlt_frame_push_get_image( a_frame, transition_get_image );
mlt_frame_push_frame( a_frame, b_frame );
double mix = 0.5;
if ( mlt_properties_get( b_props, "audio.mix" ) != NULL )
mix = mlt_properties_get_double( b_props, "audio.mix" );
+ if ( mlt_properties_get_int( b_props, "audio.reverse" ) )
+ mix = 1 - mix;
+
mlt_frame_mix_audio( frame, b_frame, mix, buffer, format, frequency, channels, samples );
// Push the b_frame back on for get_image
}
else
mlt_properties_set_double( b_props, "audio.mix", mlt_properties_get_double( properties, "mix" ) );
+ mlt_properties_set_double( b_props, "audio.reverse", mlt_properties_get_double( properties, "reverse" ) );
}
// Backup the original get_audio (it's still needed)
/** Forward references to static functions.
*/
+static int consumer_start( mlt_consumer parent );
+static int consumer_stop( mlt_consumer parent );
+static int consumer_is_stopped( mlt_consumer parent );
static void consumer_close( mlt_consumer parent );
static void *consumer_thread( void * );
static int consumer_get_dimensions( int *width, int *height );
mlt_properties_set_double( this->properties, "volume", 1.0 );
// This is the initialisation of the consumer
- this->running = 1;
pthread_mutex_init( &this->audio_mutex, NULL );
pthread_cond_init( &this->audio_cond, NULL);
// Set the sdl flags
this->sdl_flags = SDL_HWSURFACE | SDL_DOUBLEBUF | SDL_HWACCEL | SDL_RESIZABLE;
- // Create the the thread
- pthread_create( &this->thread, NULL, consumer_thread, this );
+ // Allow thread to be started/stopped
+ parent->start = consumer_start;
+ parent->stop = consumer_stop;
+ parent->is_stopped = consumer_is_stopped;
// Return the consumer produced
return parent;
return NULL;
}
+int consumer_start( mlt_consumer parent )
+{
+ consumer_sdl this = parent->child;
+
+ if ( !this->running )
+ {
+ this->running = 1;
+ pthread_create( &this->thread, NULL, consumer_thread, this );
+ }
+
+ return 0;
+}
+
+int consumer_stop( mlt_consumer parent )
+{
+ // Get the actual object
+ consumer_sdl this = parent->child;
+
+ if ( this->running )
+ {
+ // Kill the thread and clean up
+ this->running = 0;
+
+ pthread_mutex_lock( &this->audio_mutex );
+ pthread_cond_broadcast( &this->audio_cond );
+ pthread_mutex_unlock( &this->audio_mutex );
+
+ pthread_join( this->thread, NULL );
+ }
+
+ return 0;
+}
+
+int consumer_is_stopped( mlt_consumer parent )
+{
+ consumer_sdl this = parent->child;
+ return !this->running;
+}
+
static int sdl_lock_display( )
{
SDL_Surface *screen = SDL_GetVideoSurface( );
SDL_AudioSpec got;
SDL_EnableKeyRepeat( SDL_DEFAULT_REPEAT_DELAY, SDL_DEFAULT_REPEAT_INTERVAL );
+ SDL_EnableUNICODE( 1 );
// specify audio format
memset( &request, 0, sizeof( SDL_AudioSpec ) );
case SDL_KEYDOWN:
{
mlt_producer producer = mlt_properties_get_data( properties, "transport_producer", NULL );
+ char keyboard[ 2 ] = " ";
void (*callback)( mlt_producer, char * ) = mlt_properties_get_data( properties, "transport_callback", NULL );
- if ( callback != NULL && producer != NULL && strcmp( SDL_GetKeyName(event.key.keysym.sym), "space" ) )
- callback( producer, SDL_GetKeyName(event.key.keysym.sym) );
- else if ( callback != NULL && producer != NULL && !strcmp( SDL_GetKeyName(event.key.keysym.sym), "space" ) )
- callback( producer, " " );
+ if ( callback != NULL && producer != NULL && event.key.keysym.unicode < 0x80 && event.key.keysym.unicode > 0 )
+ {
+ keyboard[ 0 ] = ( char )event.key.keysym.unicode;
+ callback( producer, keyboard );
+ }
}
break;
}
SDL_FreeYUVOverlay( this->sdl_overlay );
SDL_Quit( );
+ this->sdl_screen = NULL;
+ this->sdl_overlay = NULL;
+ this->audio_avail = 0;
+
return NULL;
}
// Get the actual object
consumer_sdl this = parent->child;
- // Kill the thread and clean up
- this->running = 0;
-
- pthread_mutex_lock( &this->audio_mutex );
- pthread_cond_broadcast( &this->audio_cond );
- pthread_mutex_unlock( &this->audio_mutex );
+ // Stop the consumer
+ mlt_consumer_stop( parent );
- pthread_join( this->thread, NULL );
+ // Destroy mutexes
pthread_mutex_destroy( &this->audio_mutex );
pthread_cond_destroy( &this->audio_cond );
return mlt_service_connect_producer( &this->parent, producer, 0 );
}
+/** Start the consumer.
+*/
+
+int mlt_consumer_start( mlt_consumer this )
+{
+ if ( this->start != NULL )
+ return this->start( this );
+ return 0;
+}
+
+/** Stop the consumer.
+*/
+
+int mlt_consumer_stop( mlt_consumer this )
+{
+ if ( this->stop != NULL )
+ return this->stop( this );
+ return 0;
+}
+
+/** Determine if the consumer is stopped.
+*/
+
+int mlt_consumer_is_stopped( mlt_consumer this )
+{
+ if ( this->is_stopped != NULL )
+ return this->is_stopped( this );
+ return 0;
+}
+
/** Close the consumer.
*/
struct mlt_service_s parent;
// public virtual
+ int ( *start )( mlt_consumer );
+ int ( *stop )( mlt_consumer );
+ int ( *is_stopped )( mlt_consumer );
void ( *close )( mlt_consumer );
// Private data
extern mlt_service mlt_consumer_service( mlt_consumer this );
extern mlt_properties mlt_consumer_properties( mlt_consumer this );
extern int mlt_consumer_connect( mlt_consumer this, mlt_service producer );
+extern int mlt_consumer_start( mlt_consumer this );
+extern int mlt_consumer_stop( mlt_consumer this );
+extern int mlt_consumer_is_stopped( mlt_consumer this );
extern void mlt_consumer_close( mlt_consumer );
#endif
if ( p_alpha )
p_alpha += x_src + y_src * stride_src / 2;
+ uint8_t *p = p_src;
+ uint8_t *q = p_dest;
+ uint8_t *o = p_dest;
+ uint8_t *z = p_alpha;
+
+ uint8_t Y;
+ uint8_t UV;
+ uint8_t a;
+ float value;
+
// now do the compositing only to cropped extents
for ( i = 0; i < height_src; i++ )
{
- uint8_t *p = p_src;
- uint8_t *q = p_dest;
- uint8_t *o = p_dest;
- uint8_t *z = p_alpha;
+ p = p_src;
+ q = p_dest;
+ o = p_dest;
+ z = p_alpha;
for ( j = 0; j < width_src; j ++ )
{
- uint8_t y = *p ++;
- uint8_t uv = *p ++;
- uint8_t a = ( z == NULL ) ? 255 : *z ++;
- float value = ( weight * ( float ) a / 255.0 );
- *o ++ = (uint8_t)( y * value + *q++ * ( 1 - value ) );
- *o ++ = (uint8_t)( uv * value + *q++ * ( 1 - value ) );
+ Y = *p ++;
+ UV = *p ++;
+ a = ( z == NULL ) ? 255 : *z ++;
+ value = ( weight * ( float ) a / 255.0 );
+ *o ++ = (uint8_t)( Y * value + *q++ * ( 1 - value ) );
+ *o ++ = (uint8_t)( UV * value + *q++ * ( 1 - value ) );
}
p_src += stride_src;
#include <stdio.h>
#include <stdlib.h>
+#include <string.h>
/** Private structure.
*/
last = time;
fprintf( stderr, "%d: %lld\n", i, time );
}
+ fprintf( stderr, "Current Position: %lld\n", mlt_producer_position( producer ) );
}
break;
mlt_producer_seek( producer, time );
}
break;
+ case 'H':
+ if ( producer != NULL )
+ {
+ mlt_position position = mlt_producer_position( producer );
+ mlt_producer_seek( producer, position - ( mlt_producer_get_fps( producer ) * 60 ) );
+ }
+ break;
case 'h':
- if ( multitrack != NULL )
+ if ( producer != NULL )
{
mlt_position position = mlt_producer_position( producer );
mlt_producer_set_speed( producer, 0 );
- mlt_producer_seek( producer, position - 1 >= 0 ? position - 1 : 0 );
+ mlt_producer_seek( producer, position - 1 );
}
break;
case 'j':
}
break;
case 'l':
- if ( multitrack != NULL )
+ if ( producer != NULL )
{
mlt_position position = mlt_producer_position( producer );
mlt_producer_set_speed( producer, 0 );
mlt_producer_seek( producer, position + 1 );
}
break;
+ case 'L':
+ if ( producer != NULL )
+ {
+ mlt_position position = mlt_producer_position( producer );
+ mlt_producer_seek( producer, position + ( mlt_producer_get_fps( producer ) * 60 ) );
+ }
+ break;
}
}
}
fprintf( stderr, "+-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+\n" );
fprintf( stderr, "+---------------------------------------------------------------------+\n" );
- fprintf( stderr, "| h = previous, l = next |\n" );
+ fprintf( stderr, "| H = back 1 minute, L = forward 1 minute |\n" );
+ fprintf( stderr, "| h = previous frame, l = next frame |\n" );
fprintf( stderr, "| g = start of clip, j = next clip, k = previous clip |\n" );
fprintf( stderr, "| 0 = restart, q = quit, space = play |\n" );
fprintf( stderr, "+---------------------------------------------------------------------+\n" );
// Connect consumer to tractor
mlt_consumer_connect( consumer, mlt_field_service( field ) );
+ // Start the consumer
+ mlt_consumer_start( consumer );
+
// Transport functionality
transport( inigo );
+
+ // Stop the consumer
+ mlt_consumer_stop( consumer );
}
else if ( store != NULL )
{
fprintf( stderr, "Project saved as %s.\n", name );
fclose( store );
}
-
}
else
{
" [ -consumer id[:arg] [ name=value ]* ]\n"
" [ -filter id[:arg] [ name=value ] * ]\n"
" [ -transition id[:arg] [ name=value ] * ]\n"
- " [ -blank time ]\n"
+ " [ -blank frames ]\n"
" [ -track ]\n"
" [ producer [ name=value ] * ]+\n" );
}
result = mlt_factory_producer( "pixbuf", file );
else if ( strstr( file, ".png" ) )
result = mlt_factory_producer( "pixbuf", file );
+ else if ( strstr( file, ".tga" ) )
+ result = mlt_factory_producer( "pixbuf", file );
+ else if ( strstr( file, ".txt" ) )
+ result = mlt_factory_producer( "pango", file );
// 2nd Line fallbacks
if ( result == NULL && strstr( file, ".dv" ) )
{
clear_unit( unit );
miracle_log( LOG_DEBUG, "Cleaned playlist" );
+ miracle_unit_status_communicate( unit );
return valerie_ok;
}
return valerie_invalid_file;
}
-/** Start playing the clip.
-
- Start a dv-pump and commence dv1394 transmission.
+/** Start playing the unit.
\todo error handling
\param unit A miracle_unit handle.
mlt_properties properties = unit->properties;
mlt_playlist playlist = mlt_properties_get_data( properties, "playlist", NULL );
mlt_producer producer = mlt_playlist_producer( playlist );
+	mlt_consumer consumer = mlt_properties_get_data( properties, "consumer", NULL );
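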
mlt_producer_set_speed( producer, ( double )speed / 1000 );
+ mlt_consumer_start( consumer );
miracle_unit_status_communicate( unit );
}
void miracle_unit_terminate( miracle_unit unit )
{
+ mlt_consumer consumer = mlt_properties_get_data( unit->properties, "consumer", NULL );
+ mlt_consumer_stop( consumer );
+ miracle_unit_status_communicate( unit );
}
/** Query the status of unit playback.
int miracle_unit_has_terminated( miracle_unit unit )
{
- return 0;
+ mlt_consumer consumer = mlt_properties_get_data( unit->properties, "consumer", NULL );
+ return mlt_consumer_is_stopped( consumer );
}
/** Transfer the currently loaded clip to another unit
status->generation = mlt_properties_get_int( properties, "generation" );
- if ( !strcmp( status->clip, "" ) )
+ if ( miracle_unit_has_terminated( unit ) )
+ status->status = unit_stopped;
+ else if ( !strcmp( status->clip, "" ) )
status->status = unit_not_loaded;
else if ( status->speed == 0 )
status->status = unit_paused;
if ( unit == NULL )
return RESPONSE_INVALID_UNIT;
else
+ {
miracle_unit_play( unit, 0 );
+ miracle_unit_terminate( unit );
+ }
return RESPONSE_SUCCESS;
}
char *filename;
int width;
int height;
- double *bitmap;
+ float *bitmap;
}
transition_luma;
// image processing functions
-static inline double smoothstep( double edge1, double edge2, double a )
+static inline float smoothstep( float edge1, float edge2, float a )
{
if ( a < edge1 )
return 0.0;
\param field_order -1 = progressive, 0 = lower field first, 1 = top field first
*/
static void luma_composite( mlt_frame this, mlt_frame b_frame, int luma_width, int luma_height,
- double *luma_bitmap, double pos, double frame_delta, double softness, int field_order )
+ float *luma_bitmap, float pos, float frame_delta, float softness, int field_order )
{
int width_src, height_src;
int width_dest, height_dest;
int i, j;
int stride_src;
int stride_dest;
- double weight = 0;
+ float weight = 0;
int field;
format_src = mlt_image_yuv422;
stride_src = width_src * 2;
stride_dest = width_dest * 2;
+ // Offset the position based on which field we're looking at ...
+ float field_pos[ 2 ];
+ field_pos[ 0 ] = pos + ( ( field_order == 0 ? 1 : 0 ) * frame_delta * 0.5 );
+ field_pos[ 1 ] = pos + ( ( field_order == 0 ? 0 : 1 ) * frame_delta * 0.5 );
+
+ // adjust the position for the softness level
+ field_pos[ 0 ] *= ( 1.0 + softness );
+ field_pos[ 1 ] *= ( 1.0 + softness );
+
+ uint8_t *p;
+ uint8_t *q;
+ uint8_t *o;
+ float *l;
+
+ uint8_t y;
+ uint8_t uv;
+ float value;
+
// composite using luma map
for ( field = 0; field < ( field_order < 0 ? 1 : 2 ); ++field )
{
- // Offset the position based on which field we're looking at ...
- double field_pos = pos + ( ( field_order == 0 ? 1 - field : field) * frame_delta * 0.5 );
-
- // adjust the position for the softness level
- field_pos *= ( 1.0 + softness );
-
for ( i = field; i < height_src; i += ( field_order < 0 ? 1 : 2 ) )
{
- uint8_t *p = &p_src[ i * stride_src ];
- uint8_t *q = &p_dest[ i * stride_dest ];
- uint8_t *o = &p_dest[ i * stride_dest ];
- double *l = &luma_bitmap[ i * luma_width ];
+ p = &p_src[ i * stride_src ];
+ q = &p_dest[ i * stride_dest ];
+ o = &p_dest[ i * stride_dest ];
+ l = &luma_bitmap[ i * luma_width ];
for ( j = 0; j < width_src; j ++ )
{
- uint8_t y = *p ++;
- uint8_t uv = *p ++;
- weight = *l ++;
- double value = smoothstep( weight, weight + softness, field_pos );
+ y = *p ++;
+ uv = *p ++;
+ weight = *l ++;
+ value = smoothstep( weight, weight + softness, field_pos[ field ] );
*o ++ = (uint8_t)( y * value + *q++ * ( 1 - value ) );
*o ++ = (uint8_t)( uv * value + *q++ * ( 1 - value ) );
mlt_properties b_props = mlt_frame_properties( b_frame );
// Arbitrary composite defaults
- static double previous_mix = 0;
- double mix = 0;
- int luma_width = 0;
- int luma_height = 0;
- double *luma_bitmap = NULL;
- double luma_softness = 0;
- int progressive = 0;
- int top_field_first = 0;
-
- // mix is the offset time value in the duration of the transition
- // - also used as the mixing level for a dissolve
- if ( mlt_properties_get( b_props, "image.mix" ) != NULL )
- mix = mlt_properties_get_double( b_props, "image.mix" );
-
- // (mix - previous_mix) is the animation delta, if backwards reset previous
- if ( mix < previous_mix )
- previous_mix = 0;
-
- // Get the interlace and field properties of the frame
- if ( mlt_properties_get( b_props, "progressive" ) != NULL )
- progressive = mlt_properties_get_int( b_props, "progressive" );
- if ( mlt_properties_get( b_props, "top_field_first" ) != NULL )
- top_field_first = mlt_properties_get_int( b_props, "top_field_first" );
-
- // Get the luma map parameters
- if ( mlt_properties_get( b_props, "luma.width" ) != NULL )
- luma_width = mlt_properties_get_int( b_props, "luma.width" );
- if ( mlt_properties_get( b_props, "luma.height" ) != NULL )
- luma_height = mlt_properties_get_int( b_props, "luma.height" );
- if ( mlt_properties_get( b_props, "luma.softness" ) != NULL )
- luma_softness = mlt_properties_get_double( b_props, "luma.softness" );
- luma_bitmap = (double*) mlt_properties_get_data( b_props, "luma.bitmap", NULL );
+ float frame_delta = 1 / mlt_properties_get_double( b_props, "fps" );
+ float mix = mlt_properties_get_double( b_props, "image.mix" );
+ int luma_width = mlt_properties_get_int( b_props, "luma.width" );
+ int luma_height = mlt_properties_get_int( b_props, "luma.height" );
+ float *luma_bitmap = mlt_properties_get_data( b_props, "luma.bitmap", NULL );
+ float luma_softness = mlt_properties_get_double( b_props, "luma.softness" );
+ int progressive = mlt_properties_get_int( b_props, "progressive" );
+ int top_field_first = mlt_properties_get_int( b_props, "top_field_first" );
+ int reverse = mlt_properties_get_int( b_props, "luma.reverse" );
+
+ // Honour the reverse here
+ mix = reverse ? 1 - mix : mix;
if ( luma_width > 0 && luma_height > 0 && luma_bitmap != NULL )
// Composite the frames using a luma map
- luma_composite( this, b_frame, luma_width, luma_height, luma_bitmap, mix, mix - previous_mix,
+ luma_composite( this, b_frame, luma_width, luma_height, luma_bitmap, mix, frame_delta,
luma_softness, progressive > 0 ? -1 : top_field_first );
else
// Dissolve the frames using the time offset for mix value
*height = mlt_properties_get_int( a_props, "height" );
*image = mlt_properties_get_data( a_props, "image", NULL );
- previous_mix = mix;
-
return 0;
}
/** Load the luma map from PGM stream.
*/
-static void luma_read_pgm( FILE *f, double **map, int *width, int *height )
+static void luma_read_pgm( FILE *f, float **map, int *width, int *height )
{
uint8_t *data = NULL;
while (1)
int i = 2;
int maxval;
int bpp;
- double *p;
+ float *p;
line[127] = '\0';
break;
// allocate the luma bitmap
- *map = p = (double*) malloc( *width * *height * sizeof( double ) );
+ *map = p = (float*) malloc( *width * *height * sizeof( float ) );
if ( *map == NULL )
break;
for ( i = 0; i < *width * *height * bpp; i += bpp )
{
if ( bpp == 1 )
- *p++ = (double) data[ i ] / (double) maxval;
+ *p++ = (float) data[ i ] / (float) maxval;
else
- *p++ = (double) ( ( data[ i ] << 8 ) + data[ i+1 ] ) / (double) maxval;
+ *p++ = (float) ( ( data[ i ] << 8 ) + data[ i+1 ] ) / (float) maxval;
}
break;
mlt_position in = mlt_transition_get_in( transition );
mlt_position out = mlt_transition_get_out( transition );
mlt_position time = mlt_frame_get_position( b_frame );
- double pos = ( double )( time - in ) / ( double )( out - in + 1 );
+ float pos = ( float )( time - in ) / ( float )( out - in + 1 );
// Set the b frame properties
mlt_properties_set_double( b_props, "image.mix", pos );
mlt_properties_set_int( b_props, "luma.width", this->width );
mlt_properties_set_int( b_props, "luma.height", this->height );
mlt_properties_set_data( b_props, "luma.bitmap", this->bitmap, 0, NULL, NULL );
- if ( mlt_properties_get( properties, "softness" ) != NULL )
- mlt_properties_set_double( b_props, "luma.softness", mlt_properties_get_double( properties, "softness" ) );
+ mlt_properties_set_int( b_props, "luma.reverse", mlt_properties_get_int( properties, "reverse" ) );
+ mlt_properties_set_double( b_props, "luma.softness", mlt_properties_get_double( properties, "softness" ) );
mlt_frame_push_get_image( a_frame, transition_get_image );
mlt_frame_push_frame( a_frame, b_frame );
double mix = 0.5;
if ( mlt_properties_get( b_props, "audio.mix" ) != NULL )
mix = mlt_properties_get_double( b_props, "audio.mix" );