- Software written in C++ and OpenGL
- Software captured as video
- Resolution: 1920x1080
- Encoding: 30fps intra-frame
- Length: 45 minutes
An input RGB noise image is passed back and forth between two different shader programs. One of the shaders is a blurring filter and the other a sharpening filter with a variable kernel definition. The video was rendered in real time, and the evolution of the image is a recording of a one-hour-long performance.
By pairing a sharpening filter with its diametric opposite, a blur filter, one could regard this work as an attempt to find the essential look, the inherent material visual properties, of the graphics card itself.
This project is an implementation of a convolution feedback loop between two different shaders acting as image filters. The implementation references Adam Ferriss's contribution on Shadertoy (2016), and I have tried to recreate some of the effects that the artist Nenad Popov uses in his performances (2013). I managed to achieve similar looks by implementing different kernels in the sharpening shader, especially the emboss kernel.
The program works as follows. An input texture is rendered to a framebuffer, inputBuffer. The input texture starts out empty, and procedural noise based on Inigo Quilez's Voronoise shader (2014) is applied to it. The texture bound to inputBuffer is read into framebufferA if the pingpong boolean is set to false (its default value). The texture read into framebufferA is processed with bufferShaderA, a kernel-based image sharpening filter with a choice of seven kernels. The texture rendered in framebufferA is then read by framebufferB and processed with another shader, bufferShaderB, a blur filter with a variable blur size. After this, the default framebuffer is bound and the texture attached to framebufferA is rendered to the display.
If the pingpong boolean is set to true, inputBuffer is no longer read by framebufferA. Instead, the texture bound to framebufferB is fed back into framebufferA, creating the feedback loop of blur and sharpen. The result can already be quite interesting, but it is only when changes are made to the geometry and colors in one of the shaders that really interesting patterns appear.
Grayscale noise can be mixed into the sharpening shader, and the coordinate system can be zoomed. Furthermore, the kernel can be switched between sharpen, Sobel (in four directions), emboss, and outline. These are selected, in that order, by the u_kernel_type uniform that is passed to bufferShaderA. The kernels are modified from those described by Victor Powell (2015).
- Ferriss, Adam (2016). Convolution Feedback. Shader. https://www.shadertoy.com/view/MtdXW4
- Popov, Nenad (2013 onwards). Sonata for Convolution and Feedback. Live Video Performance. https://vimeo.com/19906756
- Powell, Victor (2015). Image Kernels Explained Visually. Website. http://setosa.io/ev/image-kernels/
- Quilez, Inigo (2014). Voronoise. Shader. https://www.shadertoy.com/view/Xd23Dh