Audio/Video Development: Implementing and Optimizing Gaussian Blur
Contents
The principle of Gaussian blur
Analysis of GPUImage's Gaussian blur implementation
Optimizing Gaussian blur
References
Takeaways
Blurring is a technique we use all the time in everyday development. On Android there are open-source Java implementations as well as the RenderScript approach; in this article we learn how to implement Gaussian blur with OpenGL.
The Gaussian blur I had used at work only ever reached the level of basic practical use; why it works, and whether it could be optimized to run faster and use less memory, were questions I had never really considered.
Through this article we will study the principle, implementation, and optimization of Gaussian blur. Let's get started.
1. The Principle of Gaussian Blur
This section touches on a few basic mathematical concepts: the normal distribution, the Gaussian function, convolution, and the blur radius. We will review them through the study and practice below.
"Blur" can be understood as every pixel taking the average of the pixels around it. There are many kinds of blur; here we look at mean (box) blur and Gaussian blur.
In a mean blur, each pixel takes the average value of its surrounding pixels, and every surrounding point is given the same weight regardless of how far it is from the current point.

Screenshot from: GAMES101 - Introduction to Modern Computer Graphics, by 閆令琪 (Lingqi Yan)
A mean blur does produce a blur effect, but for the blurred result to look closer to the original image we have to consider weighting: points closer to the center should carry larger weights, and points farther away smaller ones.
The normal distribution is exactly such a weighting scheme: values are largest near the center and fall off the farther you move from it.

Image from: 高斯模糊的算法 (The Algorithm of Gaussian Blur)
An image is two-dimensional, so the corresponding distribution is the two-dimensional normal distribution, whose density function is called the Gaussian function.

Image from: Android圖像處理 - 高斯模糊的原理及實(shí)現(xiàn) (Android Image Processing: the Principle and Implementation of Gaussian Blur). In the function, σ is the standard deviation of x (σ² is the variance).
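Written out, the two-dimensional Gaussian function is:

$$ G(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}} $$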
With the Gaussian function we can compute the weight of every point.
Suppose the blur radius is 1, which gives a 3x3 matrix, and let the Gaussian function's σ be 1.5. Compute each point's weight from its (x, y) coordinates; since the weights of all points should sum to 1, normalize the computed values.
Using the normalized weight matrix as the convolution kernel, convolve it with the original image to obtain the blurred values.
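To make this concrete, here is a minimal Java sketch (class and method names are mine, for illustration) that builds the 3x3 kernel with σ = 1.5 and normalizes it so the weights sum to 1:

import java.util.Arrays;

public class GaussianKernelDemo {
    // Build the 3x3 Gaussian kernel (blur radius 1) described above and
    // normalize it so the nine weights sum to 1.
    static float[][] gaussianKernel3x3(double sigma) {
        float[][] kernel = new float[3][3];
        double sum = 0;
        for (int y = -1; y <= 1; y++) {
            for (int x = -1; x <= 1; x++) {
                // 2D Gaussian: (1 / (2*pi*sigma^2)) * e^(-(x^2 + y^2) / (2*sigma^2))
                double w = Math.exp(-(x * x + y * y) / (2 * sigma * sigma))
                        / (2 * Math.PI * sigma * sigma);
                kernel[y + 1][x + 1] = (float) w;
                sum += w;
            }
        }
        for (float[] row : kernel) {
            for (int x = 0; x < 3; x++) {
                row[x] /= sum; // normalize
            }
        }
        return kernel;
    }

    public static void main(String[] args) {
        // For sigma = 1.5 the normalized center weight comes out around 0.148
        System.out.println(Arrays.deepToString(gaussianKernel3x3(1.5)));
    }
}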

Gaussian blur is a low-pass filter: it filters out the high-frequency signal and keeps the low-frequency signal, smoothing away the sharp boundaries in the image content and thereby producing the blur.
2. Analysis of GPUImage's Gaussian Blur Implementation
Having covered the principle, in this section we look at how to implement Gaussian blur. GPUImage is a powerful and feature-rich open-source OpenGL image-processing library that ships with a number of filter implementations; its Gaussian blur filter is GPUImageGaussianBlurFilter. Let's analyze how it works.
// Vertex shader
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

const int GAUSSIAN_SAMPLES = 9;

uniform float texelWidthOffset;
uniform float texelHeightOffset;

varying vec2 textureCoordinate;
varying vec2 blurCoordinates[GAUSSIAN_SAMPLES];

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;

    // Calculate the positions for the blur
    int multiplier = 0;
    vec2 blurStep;
    vec2 singleStepOffset = vec2(texelHeightOffset, texelWidthOffset);

    for (int i = 0; i < GAUSSIAN_SAMPLES; i++)
    {
        multiplier = (i - ((GAUSSIAN_SAMPLES - 1) / 2));
        // Blur in x (horizontal)
        blurStep = float(multiplier) * singleStepOffset;
        blurCoordinates[i] = inputTextureCoordinate.xy + blurStep;
    }
}
// Fragment shader
uniform sampler2D inputImageTexture;

const lowp int GAUSSIAN_SAMPLES = 9;

varying highp vec2 textureCoordinate;
varying highp vec2 blurCoordinates[GAUSSIAN_SAMPLES];

void main()
{
    lowp vec3 sum = vec3(0.0);
    lowp vec4 fragColor = texture2D(inputImageTexture, textureCoordinate);

    sum += texture2D(inputImageTexture, blurCoordinates[0]).rgb * 0.05;
    sum += texture2D(inputImageTexture, blurCoordinates[1]).rgb * 0.09;
    sum += texture2D(inputImageTexture, blurCoordinates[2]).rgb * 0.12;
    sum += texture2D(inputImageTexture, blurCoordinates[3]).rgb * 0.15;
    sum += texture2D(inputImageTexture, blurCoordinates[4]).rgb * 0.18;
    sum += texture2D(inputImageTexture, blurCoordinates[5]).rgb * 0.15;
    sum += texture2D(inputImageTexture, blurCoordinates[6]).rgb * 0.12;
    sum += texture2D(inputImageTexture, blurCoordinates[7]).rgb * 0.09;
    sum += texture2D(inputImageTexture, blurCoordinates[8]).rgb * 0.05;

    gl_FragColor = vec4(sum, fragColor.a);
}
From the shader code we can see GAUSSIAN_SAMPLES = 9: four samples on each side plus one center sample, i.e. 2x4+1 = 9, so each pass uses a 1x9 kernel (the two passes combined are equivalent to a 9x9 kernel).
blurCoordinates stores the precomputed texture coordinates; the convolution itself is then carried out in the fragment shader.
GPUImage blurs along the X axis and the Y axis in two separate passes, which lowers the algorithmic complexity: applying a two-dimensional m×m kernel to an M×N image costs O(m²·M·N), which is expensive, whereas two one-dimensional passes cost only O(2m·M·N). With m = 9, that means 18 texture reads per pixel instead of 81.
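The two passes give the same result as the full 2D convolution because the 2D Gaussian is separable: it factors into the product of two 1D Gaussians, one in x and one in y:

$$ G(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}} = \left( \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{x^2}{2\sigma^2}} \right) \left( \frac{1}{\sqrt{2\pi}\,\sigma} e^{-\frac{y^2}{2\sigma^2}} \right) $$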
The Renderer is as follows:
public class GPUImageRender implements GLSurfaceView.Renderer {
    private Context context;
    private int inputTextureId;
    private GPUImageGaussianBlurFilter blurFilter;
    private FloatBuffer glCubeBuffer;
    private FloatBuffer glTextureBuffer;

    public static final float CUBE[] = {
            -1.0f, -1.0f,
            1.0f, -1.0f,
            -1.0f, 1.0f,
            1.0f, 1.0f,
    };

    public static final float TEXTURE_NO_ROTATION[] = {
            0.0f, 1.0f,
            1.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f,
    };

    public GPUImageRender(Context context) {
        this.context = context;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        String vertexStr = ShaderHelper.loadAsset(context.getResources(), "blur_vertex_gpuimage.glsl");
        String fragStr = ShaderHelper.loadAsset(context.getResources(), "blur_frag_gpuimage.glsl");
        blurFilter = new GPUImageGaussianBlurFilter(vertexStr, fragStr);
        blurFilter.ifNeedInit();
        inputTextureId = TextureHelper.loadTexture(context, R.drawable.bg);

        glCubeBuffer = ByteBuffer.allocateDirect(CUBE.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        glCubeBuffer.put(CUBE).position(0);

        glTextureBuffer = ByteBuffer.allocateDirect(TEXTURE_NO_ROTATION.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        glTextureBuffer.put(TEXTURE_NO_ROTATION).position(0);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        blurFilter.onOutputSizeChanged(width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Set the clear color before clearing, so it takes effect on the first frame too
        GLES20.glClearColor(0f, 0f, 0f, 1f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        blurFilter.onDraw(inputTextureId, glCubeBuffer, glTextureBuffer);
    }
}
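For completeness, a typical way to attach this renderer to a GLSurfaceView (the view id here is an assumption for illustration):

// Inside an Activity's onCreate; R.id.gl_surface_view is a hypothetical id
GLSurfaceView glSurfaceView = findViewById(R.id.gl_surface_view);
glSurfaceView.setEGLContextClientVersion(2); // the shaders target OpenGL ES 2.0
glSurfaceView.setRenderer(new GPUImageRender(this));
// A static image only needs to be redrawn on demand
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);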
public class GPUImageTwoPassFilter extends GPUImageFilterGroup {
    public GPUImageTwoPassFilter(String firstVertexShader, String firstFragmentShader,
                                 String secondVertexShader, String secondFragmentShader) {
        super(null);
        addFilter(new GPUImageFilter(firstVertexShader, firstFragmentShader));
        addFilter(new GPUImageFilter(secondVertexShader, secondFragmentShader));
    }
}

public GPUImageGaussianBlurFilter(float blurSize, String vertexStr, String fragStr) {
    super(vertexStr, fragStr, vertexStr, fragStr);
    this.blurSize = blurSize;
}
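The same shader pair is thus run twice, once per axis. Conceptually, each pass receives a texel offset along one axis only; a simplified sketch of that idea follows (not GPUImage's verbatim code, and the helper's signature is mine):

// Pass 1 steps along X only (horizontal blur), pass 2 along Y only (vertical).
// program1/program2 are the linked shader programs of the two passes.
static void initTexelOffsets(int program1, int program2,
                             float blurSize, int width, int height) {
    GLES20.glUseProgram(program1);
    GLES20.glUniform1f(GLES20.glGetUniformLocation(program1, "texelWidthOffset"), blurSize / width);
    GLES20.glUniform1f(GLES20.glGetUniformLocation(program1, "texelHeightOffset"), 0f);

    GLES20.glUseProgram(program2);
    GLES20.glUniform1f(GLES20.glGetUniformLocation(program2, "texelWidthOffset"), 0f);
    GLES20.glUniform1f(GLES20.glGetUniformLocation(program2, "texelHeightOffset"), blurSize / height);
}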
The complete code has been uploaded to GitHub: https://github.com/ayyb1988/mediajourney
It uses the FBO technique discussed in the previous article.
//com.av.mediajourney.opengl.gpuimage.GPUImageFilterGroup#onDraw
public void onDraw(final int textureId, final FloatBuffer cubeBuffer,
                   final FloatBuffer textureBuffer) {
    runPendingOnDrawTasks();
    if (!isInitialized() || frameBuffers == null || frameBufferTextures == null) {
        return;
    }
    if (mergedFilters != null) {
        int size = mergedFilters.size();
        int previousTextureId = textureId;
        for (int i = 0; i < size; i++) {
            GPUImageFilter filter = mergedFilters.get(i);
            boolean isNotLast = i < size - 1;
            // If this is not the last filter, render off-screen into an FBO;
            // otherwise skip the FBO and render straight to the screen
            if (isNotLast) {
                GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffers[i]);
                GLES20.glClearColor(0, 0, 0, 0);
            }
            // The first filter uses the input texture id, vertex buffer, and texture buffer
            if (i == 0) {
                filter.onDraw(previousTextureId, cubeBuffer, textureBuffer);
            } else if (i == size - 1) {
                filter.onDraw(previousTextureId, glCubeBuffer, (size % 2 == 0) ? glTextureFlipBuffer : glTextureBuffer);
            } else {
                filter.onDraw(previousTextureId, glCubeBuffer, glTextureBuffer);
            }
            // If this is not the last filter, unbind the FBO and feed its output
            // texture into the next filter as input
            if (isNotLast) {
                GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
                previousTextureId = frameBufferTextures[i];
            }
        }
    }
}
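For reference, a minimal sketch of how one entry of the frameBuffers / frameBufferTextures pair used above can be created (GPUImage's filter group does essentially this when the output size changes; the helper below is mine):

static void createFrameBuffer(int width, int height,
                              int[] frameBuffers, int[] frameBufferTextures) {
    GLES20.glGenFramebuffers(1, frameBuffers, 0);
    GLES20.glGenTextures(1, frameBufferTextures, 0);

    // Allocate the texture that will receive the off-screen rendering
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, frameBufferTextures[0]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

    // Attach the texture as the framebuffer's color attachment
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffers[0]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, frameBufferTextures[0], 0);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}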
For the full code, see GitHub: https://github.com/ayyb1988/mediajourney
The effect after Gaussian blur:

3. Optimizing Gaussian Blur
How can we make the blur more efficient, i.e. reduce its running time, while preserving the blur quality? The direct factor is the amount of computation, and we can attack it from several directions:
Reduce the offset size (blur radius)
Optimize the algorithm implementation
Downscale the image before blurring, to reduce the amount of data to process
Understand how the GPU executes code: reduce branch statements, use OpenGL ES 3.0, and so on
** 減少偏移大?。:霃剑┖蛢?yōu)化算法實(shí)現(xiàn)見glsl
// Vertex shader
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

//const int GAUSSIAN_SAMPLES = 9;
// Optimization: SHIFT_SIZE is the number of offsets on each side of the center,
// so the Gaussian kernel size is (SHIFT_SIZE * 2 + 1)
const int SHIFT_SIZE = 2;

uniform float texelWidthOffset;
uniform float texelHeightOffset;

varying vec2 textureCoordinate;
varying vec4 blurCoordinates[SHIFT_SIZE];

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
    // Step between samples
    vec2 singleStepOffset = vec2(texelHeightOffset, texelWidthOffset);

    // int multiplier = 0;
    // vec2 blurStep;
    //
    // for (int i = 0; i < GAUSSIAN_SAMPLES; i++)
    // {
    //     multiplier = (i - ((GAUSSIAN_SAMPLES - 1) / 2));
    //     // Blur in x (horizontal)
    //     blurStep = float(multiplier) * singleStepOffset;
    //     blurCoordinates[i] = inputTextureCoordinate.xy + blurStep;
    // }

    // Optimization: fewer loop iterations; each vec4 packs the coordinates of
    // two symmetric samples, one on each side of the center
    for (int i = 0; i < SHIFT_SIZE; i++) {
        blurCoordinates[i] = vec4(textureCoordinate.xy - float(i + 1) * singleStepOffset,
                                  textureCoordinate.xy + float(i + 1) * singleStepOffset);
    }
}
// Fragment shader
uniform sampler2D inputImageTexture;

//const int GAUSSIAN_SAMPLES = 9;
// Optimization: SHIFT_SIZE offsets on each side of the center, so the
// Gaussian kernel size is (SHIFT_SIZE * 2 + 1)
const int SHIFT_SIZE = 2;

varying highp vec2 textureCoordinate;
varying highp vec4 blurCoordinates[SHIFT_SIZE];

void main()
{
    /*
    lowp vec3 sum = vec3(0.0);
    lowp vec4 fragColor = texture2D(inputImageTexture, textureCoordinate);
    mediump vec3 sum = fragColor.rgb * 0.18;
    sum += texture2D(inputImageTexture, blurCoordinates[0]).rgb * 0.05;
    sum += texture2D(inputImageTexture, blurCoordinates[1]).rgb * 0.09;
    sum += texture2D(inputImageTexture, blurCoordinates[2]).rgb * 0.12;
    sum += texture2D(inputImageTexture, blurCoordinates[3]).rgb * 0.15;
    sum += texture2D(inputImageTexture, blurCoordinates[4]).rgb * 0.18;
    sum += texture2D(inputImageTexture, blurCoordinates[5]).rgb * 0.15;
    sum += texture2D(inputImageTexture, blurCoordinates[6]).rgb * 0.12;
    sum += texture2D(inputImageTexture, blurCoordinates[7]).rgb * 0.09;
    sum += texture2D(inputImageTexture, blurCoordinates[8]).rgb * 0.05;
    gl_FragColor = vec4(sum, fragColor.a);
    */

    // Sample the color at the current coordinate
    lowp vec4 currentColor = texture2D(inputImageTexture, textureCoordinate);
    mediump vec3 sum = currentColor.rgb;
    // Accumulate the colors at the offset coordinates
    for (int i = 0; i < SHIFT_SIZE; i++) {
        sum += texture2D(inputImageTexture, blurCoordinates[i].xy).rgb;
        sum += texture2D(inputImageTexture, blurCoordinates[i].zw).rgb;
    }
    // Take the average; all samples get equal weight here, which approximates
    // the Gaussian weights in exchange for fewer arithmetic operations
    gl_FragColor = vec4(sum / float(2 * SHIFT_SIZE + 1), currentColor.a);
}
**Downscaling the image before the Gaussian blur, to reduce the amount of data to process**
private static Bitmap getBitmap(Context context, int resourceId) {
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inScaled = false;
    // Read in the resource
    Bitmap bitmap = BitmapFactory.decodeResource(
            context.getResources(), resourceId, options);
    // Optimization: downscale the source image to 1/4 of its width and height,
    // i.e. 1/16 of the pixels; the scale factor depends on the use case
    bitmap = Bitmap.createScaledBitmap(bitmap,
            bitmap.getWidth() / 4,
            bitmap.getHeight() / 4,
            true);
    return bitmap;
}
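A sketch of uploading the downscaled bitmap as the input texture, replacing the resource-id based TextureHelper.loadTexture call used earlier (variable names are mine):

// Upload the downscaled bitmap as the OpenGL input texture
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
Bitmap scaled = getBitmap(context, R.drawable.bg);
// GLUtils picks the matching pixel format/type for the bitmap's config
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, scaled, 0);
scaled.recycle();
inputTextureId = tex[0];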

For the full code, see GitHub: https://github.com/ayyb1988/mediajourney
