X-Git-Url: https://git.sesse.net/?p=movit;a=blobdiff_plain;f=README;h=8e47db753733cbe0610259b6d640ce30f34e512a;hp=c6b20da79eafdfaaef911a5e53fb77f9a089b7dd;hb=refs%2Fheads%2F1.3.x-release;hpb=7dca345804c7851ff09edcc7198c5def94b9c4b1

diff --git a/README b/README
index c6b20da..8e47db7 100644
--- a/README
+++ b/README
@@ -9,7 +9,7 @@ Movit is the Modern Video Toolkit, notwithstanding
 that anything that's called “modern” usually isn't, and it's really
 not a toolkit.
 
 Movit aims to be a _high-quality_, _high-performance_, _open-source_
-library for video filters. It is currently in alpha stage.
+library for video filters.
 
 
 TL;DR, please give me download link and system demands
@@ -21,12 +21,14 @@ OK, you need
   works fine on Linux and OS X, and Movit is not very POSIX-bound.)
 * GNU Make.
 * A GPU capable of running GLSL fragment shaders,
-  process floating-point textures, and a few other things. If your machine
-  is less than five years old _and you have the appropriate drivers_,
-  you're home free.
-* The [Eigen 3] and [Google Test] libraries. (The library itself
-  depends only on the former, but you probably want to run the unit tests.)
-* The [GLEW] library, for dealing with OpenGL extensions on various
+  processing floating-point textures, and a few other things (all are
+  part of OpenGL 3.0 or newer, although most OpenGL 2.0 cards also
+  have what's needed through extensions). If your machine is less than five
+  years old _and you have the appropriate drivers_, you're home free.
+  GLES3 (for mobile devices) will also work.
+* The [Eigen 3], [FFTW3] and [Google Test] libraries. (The library itself
+  does not depend on the latter, but you probably want to run the unit tests.)
+* The [epoxy] library, for dealing with OpenGL extensions on various
   platforms.
 
 Movit has been tested with Intel GPUs with the Mesa drivers
@@ -39,14 +41,14 @@ for performance estimates.
 
 Still TL;DR, please give me the list of filters
 ===============================================
 
-Blur, diffusion, glow, lift/gamma/gain (color correction), mirror,
-mix (add two inputs), luma mix (use a map to wipe between two inputs),
-overlay (the Porter-Duff “over” operation), scale (bilinear and Lanczos),
-sharpen (both by unsharp mask and by Wiener filters), saturation
-(or desaturation), vignette, and white balance.
+Blur, diffusion, FFT-based convolution, glow, lift/gamma/gain (color
+correction), mirror, mix (add two inputs), luma mix (use a map to wipe between
+two inputs), overlay (the Porter-Duff “over” operation), scale (bilinear and
+Lanczos), sharpen (both by unsharp mask and by Wiener filters), saturation
+(or desaturation), vignette, white balance, and a deinterlacer (YADIF).
 
 Yes, that's a short list. But they all look great, are fast and don't give
-you any nasty surprises. (I'd love to include denoise, deinterlace and
+you any nasty surprises. (I'd love to include denoise and
 framerate up-/downconversion to the list, but doing them well are
 all research-grade problems, and Movit is currently not there.)
 
@@ -54,7 +56,8 @@ TL;DR, but I am interested in a programming example instead
 ===========================================================
 
-Assuming you have an OpenGL context already set up:
+Assuming you have an OpenGL context already set up (either a classic OpenGL
+context, a GL 3.x forward-compatible or core context, or a GLES3 context):
 
   using namespace movit;
 
@@ -90,16 +93,17 @@ OK, I can read a bit. What do you mean by “modern”?
 Backwards compatibility is fine and all, but sometimes we can do better
 by observing that the world has moved on. In particular:
 
-* It's 2014, so people want to edit HD video.
-* It's 2014, so everybody has a GPU.
-* It's 2014, so everybody has a working C++ compiler.
+* It's 2016, so people want to edit HD video.
+* It's 2016, so everybody has a GPU.
+* It's 2016, so everybody has a working C++ compiler.
   (Even Microsoft fixed theirs around 2003!)
 
-While from a programming standpoint I'd love to say that it's 2014
+While from a programming standpoint I'd love to say that it's 2016
 and interlacing does no longer exist, but that's not true (and interlacing,
 hated as it might be, is actually a useful and underrated technique for
-bandwidth reduction in broadcast video). Movit will eventually provide
-limited support for working with interlaced video, but currently does not.
+bandwidth reduction in broadcast video). Movit may eventually provide
+limited support for working with interlaced video; it has a deinterlacer,
+but cannot currently process video in interlaced form.
 
 
 What do you mean by “high-performance”?
@@ -124,9 +128,9 @@ decoding.
 Exactly what speeds you can expect is of course highly dependent on your GPU
 and the exact filter chain you are running. As a rule of thumb, you can
 run a reasonable filter chain (a lift/gamma/gain operation,
-a bit of diffusion, maybe a vignette) at 720p in around 30 fps on a two-year-old
+a bit of diffusion, maybe a vignette) at 720p in around 30 fps on a four-year-old
 Intel laptop. If you have a somewhat newer Intel card, you can do 1080p
-video without much problems. And on a mid-range nVidia card of today
+video without much problems. And on a low-range nVidia card of today
 (GTX 550 Ti), you can probably process 4K movies directly.
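
The hunk at @@ -54,7 +56,8 @@ shows only the first line of the README's
programming example. For orientation, here is a rough sketch of what a
complete EffectChain setup looks like; it is a reconstruction rather than the
README's verbatim example, and the shader data directory, the 1280x720 size,
the pixel format and the choice of effects are illustrative assumptions that
should be checked against the installed Movit headers.

  #include <epoxy/gl.h>
  #include <movit/init.h>
  #include <movit/effect_chain.h>
  #include <movit/image_format.h>
  #include <movit/flat_input.h>
  #include <movit/saturation_effect.h>
  #include <movit/lift_gamma_gain_effect.h>

  using namespace movit;

  // Assumes an OpenGL (or GLES3) context is already current, as described
  // above, and that rgb_pixels points to 1280x720 8-bit RGB data.
  void render_one_frame(const unsigned char *rgb_pixels)
  {
      // The path to Movit's .frag shader files is an assumption; use your
      // own install prefix.
      init_movit("/usr/local/share/movit", MOVIT_DEBUG_OFF);

      EffectChain chain(1280, 720);  // Arguments give the output aspect ratio.

      ImageFormat inout_format;
      inout_format.color_space = COLORSPACE_sRGB;
      inout_format.gamma_curve = GAMMA_sRGB;

      // An 8-bit RGB buffer in main memory is the input to the chain.
      FlatInput *input = new FlatInput(inout_format, FORMAT_RGB,
                                       GL_UNSIGNED_BYTE, 1280, 720);
      chain.add_input(input);

      // Two of the filters from the list above.
      Effect *saturation = chain.add_effect(new SaturationEffect());
      saturation->set_float("saturation", 0.7f);
      chain.add_effect(new LiftGammaGainEffect());

      chain.add_output(inout_format, OUTPUT_ALPHA_FORMAT_POSTMULTIPLIED);
      chain.finalize();  // The chain is analyzed and compiled to GLSL here.

      // Upload this frame's pixels and render into FBO 0 (the window).
      input->set_pixel_data(rgb_pixels);
      chain.render_to_fbo(0, 1280, 720);
  }

In a real application the chain would be built and finalized once, with only
set_pixel_data() and render_to_fbo() running per frame; everything is folded
into one function here only to keep the sketch self-contained.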