X-Git-Url: https://git.sesse.net/?p=movit;a=blobdiff_plain;f=README;h=8e47db753733cbe0610259b6d640ce30f34e512a;hp=81bb4c99e2f8428cca5b417c6fd7801c17a57103;hb=refs%2Fheads%2F1.3.x-release;hpb=8c7e53028a3ef4805d2608643041a5d7e6bd1b6e

diff --git a/README b/README
index 81bb4c9..8e47db7 100644
--- a/README
+++ b/README
@@ -9,7 +9,7 @@ Movit is the Modern Video Toolkit, notwithstanding that anything that's called
 “modern” usually isn't, and it's really not a toolkit.
 
 Movit aims to be a _high-quality_, _high-performance_, _open-source_
-library for video filters. It is currently in alpha stage.
+library for video filters.
 
 
 TL;DR, please give me download link and system demands
@@ -25,8 +25,9 @@ OK, you need
 part of OpenGL 3.0 or newer, although most OpenGL 2.0 cards also have
 what's needed through extensions). If your machine is less than five
 years old _and you have the appropriate drivers_, you're home free.
-* The [Eigen 3] and [Google Test] libraries. (The library itself
-  depends only on the former, but you probably want to run the unit tests.)
+  GLES3 (for mobile devices) will also work.
+* The [Eigen 3], [FFTW3] and [Google Test] libraries. (The library itself
+  does not depend on the latter, but you probably want to run the unit tests.)
 * The [epoxy] library, for dealing with OpenGL extensions on various
   platforms.
 
@@ -40,13 +41,14 @@ for performance estimates.
 Still TL;DR, please give me the list of filters
 ===============================================
 
-Blur, diffusion, glow, lift/gamma/gain (color correction), mirror,
-mix (add two inputs), overlay (the Porter-Duff “over” operation),
-scale (bilinear and Lanczos), sharpen (both by unsharp mask and by
-Wiener filters), saturation (or desaturation), vignette, and white balance.
+Blur, diffusion, FFT-based convolution, glow, lift/gamma/gain (color
+correction), mirror, mix (add two inputs), luma mix (use a map to wipe between
+two inputs), overlay (the Porter-Duff “over” operation), scale (bilinear and
+Lanczos), sharpen (both by unsharp mask and by Wiener filters), saturation
+(or desaturation), vignette, white balance, and a deinterlacer (YADIF).
 
 Yes, that's a short list. But they all look great, are fast and don't give
-you any nasty surprises. (I'd love to include denoise, deinterlace and
+you any nasty surprises. (I'd love to include denoise and
 framerate up-/downconversion to the list, but doing them well are
 all research-grade problems, and Movit is currently not there.)
 
@@ -54,8 +56,8 @@ all research-grade problems, and Movit is currently not there.)
 TL;DR, but I am interested in a programming example instead
 ===========================================================
 
-Assuming you have an OpenGL context already set up (currently you need
-a classic OpenGL context; a GL 3.2+ core context won't do):
+Assuming you have an OpenGL context already set up (either a classic OpenGL
+context, a GL 3.x forward-compatible or core context, or a GLES3 context):
 
   using namespace movit;
 
@@ -91,16 +93,17 @@ OK, I can read a bit. What do you mean by “modern”?
 Backwards compatibility is fine and all, but sometimes we can do better
 by observing that the world has moved on. In particular:
 
-* It's 2014, so people want to edit HD video.
-* It's 2014, so everybody has a GPU.
-* It's 2014, so everybody has a working C++ compiler.
+* It's 2016, so people want to edit HD video.
+* It's 2016, so everybody has a GPU.
+* It's 2016, so everybody has a working C++ compiler.
   (Even Microsoft fixed theirs around 2003!)
 
-While from a programming standpoint I'd love to say that it's 2014
+While from a programming standpoint I'd love to say that it's 2016
 and interlacing does no longer exist, but that's not true (and interlacing,
 hated as it might be, is actually a useful and underrated technique for
-bandwidth reduction in broadcast video). Movit will eventually provide
-limited support for working with interlaced video, but currently does not.
+bandwidth reduction in broadcast video). Movit may eventually provide
+limited support for working with interlaced video; it has a deinterlacer,
+but cannot currently process video in interlaced form.
 
 
 What do you mean by “high-performance”?
@@ -125,9 +128,9 @@ decoding.
 Exactly what speeds you can expect is of course highly dependent on
 your GPU and the exact filter chain you are running. As a rule of
 thumb, you can run a reasonable filter chain (a lift/gamma/gain operation,
-a bit of diffusion, maybe a vignette) at 720p in around 30 fps on a two-year-old
+a bit of diffusion, maybe a vignette) at 720p in around 30 fps on a four-year-old
 Intel laptop. If you have a somewhat newer Intel card, you can do 1080p
-video without much problems. And on a mid-range nVidia card of today
+video without much problems. And on a low-range nVidia card of today
 (GTX 550 Ti), you can probably process 4K movies directly.
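
The hunk at @@ -54,8 +56,8 @@ shows only the first line of the README's
programming example ("using namespace movit;"). As a rough illustration of
how a complete chain is typically set up against this API, here is a minimal
sketch, not the README's own example. It assumes a 1280x720 sRGB RGB input,
picks LiftGammaGainEffect and SaturationEffect as stand-in filters, assumes
the Movit shader files were installed to /usr/local/share/movit, and renders
into the window-system framebuffer:

  // A minimal sketch, not the authoritative README example. Assumes an
  // OpenGL (or GLES3) context is already current on this thread, and that
  // `pixels` points to 1280x720 tightly packed 8-bit RGB data from your
  // decoder. The shader directory below is an assumption; pass wherever
  // your Movit data files actually live.
  #include <epoxy/gl.h>

  #include <movit/init.h>
  #include <movit/effect_chain.h>
  #include <movit/flat_input.h>
  #include <movit/lift_gamma_gain_effect.h>
  #include <movit/saturation_effect.h>

  using namespace movit;

  void run_example(unsigned char *pixels)
  {
      // Must be called once, with a context current, before anything else.
      init_movit("/usr/local/share/movit", MOVIT_DEBUG_OFF);

      EffectChain chain(1280, 720);

      ImageFormat inout_format;
      inout_format.color_space = COLORSPACE_sRGB;
      inout_format.gamma_curve = GAMMA_sRGB;

      FlatInput *input = new FlatInput(inout_format, FORMAT_RGB, GL_UNSIGNED_BYTE, 1280, 720);
      chain.add_input(input);

      // Two arbitrary filters; every effect in the list above is added the same way.
      Effect *lgg = chain.add_effect(new LiftGammaGainEffect());
      float gain[] = { 1.0f, 1.0f, 1.1f };  // slight blue boost, purely as an example
      lgg->set_vec3("gain", gain);

      Effect *saturation = chain.add_effect(new SaturationEffect());
      saturation->set_float("saturation", 0.7f);

      chain.add_output(inout_format, OUTPUT_ALPHA_FORMAT_POSTMULTIPLIED);
      chain.finalize();  // Movit now optimizes the chain and compiles its shaders.

      // Per frame: upload the new pixels and render. FBO 0 is the
      // window-system framebuffer; pass your own FBO to render_to_fbo()
      // if you want the result in a texture instead.
      input->set_pixel_data(pixels);
      chain.render_to_fbo(0, 1280, 720);
  }

Any of the filters from the list above (blur, glow, luma mix, the YADIF
deinterlacer, and so on) slots into the chain the same way via add_effect();
when finalize() is called, Movit combines the whole chain into as few GLSL
passes as it can.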