About the author
JZG, senior front-end development engineer at Ctrip, focusing on Android development;
zcc, senior front-end development engineer at Ctrip, focusing on iOS development.
With the popularity of mobile short video, audio and video editing tools play an important role in content apps. Rich transitions bring cooler effects to short videos and thus win users over more effectively. This article gives a brief introduction to OpenGL and the use of its related APIs, covers the basics of the GLSL shading language, and shows how to achieve image transition effects by writing custom shader programs.
2. Why use OpenGL, and the difficulties of using it
2.1 Why use OpenGL
Video transition effects are inseparable from graphics processing, and mobile devices generally use the GPU for 3D-graphics-related calculations. Compared with the CPU, the GPU is more efficient at image and animation processing. Taking Android as an example, the system provides two different GPU APIs: Vulkan and OpenGL ES. Vulkan is only supported on Android 7.0 and above, while OpenGL ES is supported on all Android versions; iOS has no official Vulkan support. At the same time, OpenGL ES, as a subset of OpenGL designed for embedded devices such as mobile phones, PDAs and game consoles, removes many non-essential features such as glBegin/glEnd and complex primitives such as quads and polygons. This trimming of redundancy yields a library that is easier to learn and easier to implement in mobile graphics hardware.
At present, with its broad system support and lean feature set, OpenGL ES has become one of the most widely used GPU APIs in short-video image processing. For convenience, "OpenGL" in this article refers to OpenGL ES.
2.2 Difficulties in using OpenGL to handle video transitions
The difficulty of using OpenGL to handle video transitions lies in writing the shaders for the transition effects. For that we can refer to the open-source GLTransitions website, which collects a large number of open-source transition effects to study and borrow from; it is introduced in more detail below.
3. A basic introduction to OpenGL and its application to transitions
3.1 A basic introduction to OpenGL
OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. What can OpenGL be used for?
Video, graphics, image processing
2D/3D game engine development
Medical software development
CAD (computer-aided design)
Virtual and augmented reality (VR/AR)
Artificial intelligence (AI)
We use OpenGL to handle video transitions, which falls under the first use case above: processing video, graphics, and images.
3.1.1 OpenGL rendering flow
When drawing with OpenGL, our main focus is on the vertex shader and the fragment shader. The vertex shader determines the vertex positions of the drawn graphic, and the fragment shader is responsible for adding color to it. The rendering process is as follows:
1) Input of vertex data:
Vertex data provides the data to be processed by subsequent stages such as the vertex shader.
2) Vertex shader:
The main function of vertex shaders is to perform coordinate transformations.
3) Geometry shader:
Unlike a vertex shader, the input to a geometry shader is a complete primitive (e.g., a point), and the output can be one or more other primitives (e.g., a triangle) or nothing at all; the geometry shader is optional.
4) Primitive assembly and rasterization:
Primitive assembly assembles the input vertices into the specified primitives. After primitive assembly and the screen-mapping stage, object coordinates have been transformed into window coordinates. Rasterization is a discretization step: the process of turning continuous 3D objects into discrete screen pixels.
5) Fragment shader:
The fragment shader is used to determine the final color of pixels on the screen.
6) Tests and blending:
The final stage of rendering is the test-and-blend phase. The tests include the scissor test, alpha test, stencil test, and depth test. Fragments that fail the tests are discarded and need no blending; fragments that pass move on to the blending phase.
After these steps, OpenGL can display the final graphic on the screen.
In the OpenGL drawing process, the stages we can program are the vertex shader and the fragment shader. These are also the two shaders that are required during rendering.
The vertex shader processes data passed in from the client, applying transformations and performing other mathematical operations to calculate lighting effects, displacement, color values, and so on. For example, to render a triangle with 3 vertices, the vertex shader executes 3 times, once for each vertex.
The three vertices are then grouped together and the triangle is rasterized region by region. Each fragment is filled in by executing the fragment shader, which outputs the final color values we see on the screen.
When drawing graphics we use OpenGL's state variables: the current color, the current view and projection transformations, line and polygon stipple patterns, polygon drawing modes, pixel-packing conventions, the position and characteristics of lights, the material properties of the objects being drawn, and so on. You can set a state (or mode) and it stays in effect until you change it again. For example, you can set the current color to white, red, or any other color, and every object drawn afterwards uses that color until the current color is set to something else. Many state variables that represent modes are toggled with glEnable() and glDisable(). This is why we say OpenGL is a state machine.
Because OpenGL performs a series of operations in sequence during rendering, like an assembly line, we call the OpenGL drawing process the render pipeline, which comes in fixed and programmable variants. We use the programmable pipeline: once vertex positions, colors, and texture coordinates have been passed in, we are free to control how that data is transformed and how the generated fragments produce the final result.
Here is a quick look at pipelines and at GLSL, the shading language that is essential in the programmable pipeline.
Pipeline: the render pipeline can be understood as a rendering assembly line. Descriptive data for the 3D object to be rendered (vertex coordinates, vertex colors, vertex textures, etc.) goes in, and after a series of transformations and rendering steps inside the pipeline a final image comes out. Put simply, it is the process by which a pile of raw graphics data passes through a pipeline, is transformed in various ways along the way, and finally appears on the screen. Pipelines are divided into two types: fixed pipelines and programmable pipelines.
Fixed pipeline: while rendering an image, we can only apply a series of shading effects by calling the fixed-pipeline effects of the GLShaderManager class.
Programmable pipeline: while rendering an image, we can process the data with custom vertex shaders and fragment shaders. Since OpenGL's use cases are far too rich for fixed pipelines or stored shaders to cover, we use the programmable pipeline to handle them.
3.1.3 GLSL (OpenGL Shading Language)
GLSL (OpenGL Shading Language) is the language used to write shaders in OpenGL: short custom programs, written by developers, that execute on the GPU (graphics processing unit) in place of fixed parts of the render pipeline, making those pipeline stages programmable. It can access the current OpenGL state, which is passed in through GLSL built-in variables. GLSL is a C-based high-level shading language, avoiding the complexity of using assembly or hardware-specific languages.
GLSL shader code is divided into two parts: the vertex shader and the fragment shader.
Shaders are editable programs used to implement image rendering in place of the fixed render pipeline. The vertex shader is mainly responsible for operations on vertex geometry, while the fragment shader (called a pixel shader in some APIs) is mainly responsible for computing the final color.
Vertex shader
A vertex shader is a programmable processing unit, typically used for per-vertex operations such as vertex transformations (rotation/translation/projection, etc.), lighting, and material application. A vertex shader is a per-vertex program: it executes once for each vertex. It replaces the vertex transformation and lighting calculations of the old fixed pipeline and is developed in GLSL. With the shading language we can implement our own vertex transformations, lighting, and other features as needed, which greatly increases the flexibility of the program.
The vertex shader works as follows: the original vertex geometry (vertex coordinates, color, texture) and other attributes are passed to the vertex shader; the custom vertex shader processes them and passes the transformed vertex positions on to the subsequent primitive-assembly stage, while the corresponding vertex texture, color, and other information is rasterized and handed to the fragment shader.
The inputs to a vertex shader are mainly the attributes, uniforms, samplers, and temporary variables of the vertex being processed; the outputs are mainly the varyings it produces plus some built-in output variables.
Vertex shader sample code:
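The original sample listing is not reproduced here; a minimal illustrative vertex shader (variable names such as a_Position, u_Matrix and v_Color are our own, not from the article) showing the attribute, uniform, and varying qualifiers discussed next might look like this:

```glsl
// Illustrative vertex shader; identifier names are assumptions.
attribute vec4 a_Position; // per-vertex position, supplied by the application
attribute vec4 a_Color;    // per-vertex color
uniform mat4 u_Matrix;     // transform matrix set by the application (read-only in GLSL)
varying vec4 v_Color;      // interpolated and passed on to the fragment shader

void main() {
    v_Color = a_Color;
    gl_Position = u_Matrix * a_Position;
}
```

And a matching fragment shader sketch that consumes the varying:

```glsl
precision mediump float;
varying vec4 v_Color; // interpolated vertex color from the vertex shader

void main() {
    gl_FragColor = v_Color;
}
```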
The attribute, varying, and uniform qualifiers appear in the vertex and fragment shaders above; here is a brief introduction to these three types.
attribute: a variable type that can only be used in the vertex shader, generally used for per-vertex data such as vertex coordinates, normals, texture coordinates, and vertex colors.
uniform: a variable passed to the shader by the external application. A uniform behaves like a constant in C: the shader can read it but cannot modify it.
varying: a variable used to pass data from the vertex shader to the fragment shader, such as a vertex color handed on to the fragment shader.
Note: attributes cannot be passed directly to the fragment shader; if they are needed there, they must be forwarded indirectly through the vertex shader. Uniforms and texture data can be passed directly to both the vertex shader and the fragment shader as required.
We have now covered vertex shaders, fragment shaders, and how to pass data to an OpenGL program.
Now we will use the knowledge just introduced to draw a picture to the screen with an OpenGL program, which is also the prerequisite for the picture-carousel transition effect. For OpenGL, drawing an image means drawing a texture. Here, just to show the effect, we do not use a transformation matrix to handle the image's aspect ratio; the image simply covers the whole window.
Start by defining a vertex shader:
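The original listing is not included here; a minimal sketch of the two shaders for drawing a full-window texture (identifier names are assumptions) could be:

```glsl
// Vertex shader: passes positions through and forwards texture coordinates.
attribute vec4 a_Position;
attribute vec2 a_TexCoord;
varying vec2 v_TexCoord;

void main() {
    v_TexCoord = a_TexCoord;
    gl_Position = a_Position; // no transform matrix: the quad covers the whole window
}
```

```glsl
// Fragment shader: samples the image texture at the interpolated coordinate.
precision mediump float;
varying vec2 v_TexCoord;
uniform sampler2D u_Texture;

void main() {
    gl_FragColor = texture2D(u_Texture, v_TexCoord);
}
```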
Next, here is the Android-side code that draws an image texture using these two shaders:
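The article's listing is not reproduced here. As a condensed sketch of what such a renderer might look like (class and method names are our own, error handling is omitted, and it needs a real GLSurfaceView and a loaded Bitmap, so it is not runnable standalone):

```java
// Sketch of an Android GLSurfaceView.Renderer that draws one image texture.
import android.graphics.Bitmap;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.GLUtils;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

class ImageRenderer implements GLSurfaceView.Renderer {
    private final Bitmap bitmap; // image to draw, loaded by the caller
    private int program, textureId;
    private FloatBuffer vertexBuf, texBuf;

    // Full-screen quad in normalized device coordinates, drawn as a triangle strip.
    private static final float[] VERTICES = { -1f, -1f,  1f, -1f,  -1f, 1f,  1f, 1f };
    // Texture coordinates; the V axis is flipped so the bitmap appears upright.
    private static final float[] TEX_COORDS = { 0f, 1f,  1f, 1f,  0f, 0f,  1f, 0f };

    ImageRenderer(Bitmap bitmap) { this.bitmap = bitmap; }

    private static FloatBuffer toBuffer(float[] data) {
        FloatBuffer b = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder()).asFloatBuffer();
        b.put(data).position(0);
        return b;
    }

    private static int compile(int type, String src) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, src);
        GLES20.glCompileShader(shader);
        return shader;
    }

    @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        vertexBuf = toBuffer(VERTICES);
        texBuf = toBuffer(TEX_COORDS);
        program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, compile(GLES20.GL_VERTEX_SHADER, VERTEX_SRC));
        GLES20.glAttachShader(program, compile(GLES20.GL_FRAGMENT_SHADER, FRAGMENT_SRC));
        GLES20.glLinkProgram(program);

        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        textureId = tex[0];
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0); // upload the bitmap
    }

    @Override public void onSurfaceChanged(GL10 gl, int w, int h) {
        GLES20.glViewport(0, 0, w, h);
    }

    @Override public void onDrawFrame(GL10 gl) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glUseProgram(program);
        int aPos = GLES20.glGetAttribLocation(program, "a_Position");
        int aTex = GLES20.glGetAttribLocation(program, "a_TexCoord");
        GLES20.glEnableVertexAttribArray(aPos);
        GLES20.glVertexAttribPointer(aPos, 2, GLES20.GL_FLOAT, false, 0, vertexBuf);
        GLES20.glEnableVertexAttribArray(aTex);
        GLES20.glVertexAttribPointer(aTex, 2, GLES20.GL_FLOAT, false, 0, texBuf);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
        GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "u_Texture"), 0);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    }

    // The shader sources shown above would be pasted in here as string constants.
    private static final String VERTEX_SRC = "...";
    private static final String FRAGMENT_SRC = "...";
}
```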
This completes the drawing of an image.
What is a transition effect? Generally speaking, it is the changeover between two video frames. In OpenGL, an image transition is actually a transition between two textures. An open-source project is recommended here that collects various GL transition effects along with their GLSL implementation code, which developers can easily port to their own projects.
GLTransitions project website address
The GLTransitions project contains close to 70 transition effects, ranging from easy to difficult, that can readily be used for picture or video transitions; many of them involve common image-processing techniques such as blending, edge detection, and erosion/dilation.
For readers who want to learn GLSL, it is a highly recommended way to pick up GLSL implementations of advanced image-processing techniques quickly and thoroughly.
Since GLSL code is portable across platforms, porting the effects of GLTransitions to mobile is relatively straightforward. Let's take the first transition effect on the site as an example to walk through the general porting process.
First, let's look at the fragment shader code required for the transition, which is the key to achieving it. The sign, mix, fract, and step functions it uses are GLSL built-ins. Again, just to show the effect, we do not use a transformation matrix to handle the image's aspect ratio; the image covers the whole window.
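The shader from the site is not reproduced here. As an illustrative stand-in, a minimal wipe-style transition in the GLTransitions format (the site's harness supplies progress, getFromColor, and getToColor) could look like this; it uses the mix and step built-ins, though a real effect from the site may also use sign and fract:

```glsl
// Illustrative only: a simple left-to-right wipe, not the actual effect from the site.
// progress, getFromColor and getToColor are provided by the GLTransitions harness.
vec4 transition(vec2 uv) {
    // step(uv.x, progress) is 1.0 where the moving edge has already passed, 0.0 elsewhere
    return mix(getFromColor(uv), getToColor(uv), step(uv.x, progress));
}
```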
As we can see, the fragment shader code from GLTransitions already implements the transition, but some modification is still required before use. Taking the code above as an example, we need to define a transition progress variable (a float ranging from 0 to 1). There are also the two most basic elements of a transition, the picture textures: a transition needs two of them, going from texture 1 to texture 2, and getFromColor and getToColor are functions that sample the colors of texture 1 and texture 2. Finally there is the indispensable main function that assigns the color computed by our program to gl_FragColor. So we modify the fragment shader code above as follows:
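A sketch of what the modified, self-contained fragment shader could look like (uniform and varying names are assumptions), with the progress uniform, the two texture samplers, getFromColor/getToColor, and the main function added:

```glsl
precision mediump float;

varying vec2 v_TexCoord;      // texture coordinate from the vertex shader
uniform sampler2D u_Texture1; // "from" picture (texture 1)
uniform sampler2D u_Texture2; // "to" picture (texture 2)
uniform float progress;       // transition progress in [0, 1], updated every frame

vec4 getFromColor(vec2 uv) { return texture2D(u_Texture1, uv); }
vec4 getToColor(vec2 uv)   { return texture2D(u_Texture2, uv); }

// The transition body copied from GLTransitions goes here; a simple wipe is shown.
vec4 transition(vec2 uv) {
    return mix(getFromColor(uv), getToColor(uv), step(uv.x, progress));
}

void main() {
    gl_FragColor = transition(v_TexCoord);
}
```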
The vertex shader code is also given here; it mainly sets the vertex coordinates and texture coordinates, both of which were introduced above and will not be repeated. The code is as follows:
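A matching vertex shader sketch (identifier names assumed), which simply forwards the full-window quad's position and texture coordinate:

```glsl
attribute vec4 a_Position; // vertex coordinate of the full-window quad
attribute vec2 a_TexCoord; // texture coordinate for this vertex
varying vec2 v_TexCoord;

void main() {
    v_TexCoord = a_TexCoord;
    gl_Position = a_Position;
}
```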
Now that the two key shader programs, the vertex shader and the fragment shader, are ready, a basic transition can be achieved: just use these two shaders in our program and, while drawing, keep updating the two textures and the transition progress according to the current frame count.
The following gives the drawing code logic, taking Android as an example:
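The full Android drawing code needs a GL context, but the core per-frame logic, advancing a frame counter and turning it into a value for the progress uniform, is plain Java. A minimal sketch (class and method names are our own):

```java
// Sketch of the per-frame progress bookkeeping used while drawing.
// In a real GLSurfaceView.Renderer, onDrawFrame() would call tick() and then
// upload the value with GLES20.glUniform1f(progressLocation, p) before drawing.
class TransitionClock {
    private final int totalFrames; // how many frames one transition should take
    private int frame = 0;

    TransitionClock(int totalFrames) {
        this.totalFrames = totalFrames;
    }

    // Returns the progress in [0, 1] for the current frame, then advances.
    float tick() {
        float p = Math.min(frame / (float) totalFrames, 1f);
        frame++;
        return p;
    }
}
```

Each rendered frame advances the clock; once tick() reaches 1.0 the transition is complete and the next pair of textures can be swapped in.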
That is the basic process of porting a transition effect from the GLTransitions website to Android; iOS is similar and just as convenient.
With the introduction above, we have a basic understanding of how to use OpenGL to handle image transitions. But what we have done so far applies the same transition to every pair of pictures, which is monotonous. The following is an idea for combining different transition effects when compositing transitions over multiple images.
Recall that in the initial port we used just one OpenGL program. Now let's load multiple OpenGL programs and use the appropriate one in each time period; this makes it easy to combine multiple transition effects.
First, define an IDrawer interface representing an object that uses one OpenGL program:
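The article's interface is not reproduced here; a plausible minimal shape (the method set is an assumption, not the article's actual API) could be:

```java
// Sketch of an IDrawer: one instance wraps one OpenGL program plus its two textures.
// Method names are assumptions; the real interface may differ.
interface IDrawer {
    void prepare();            // compile/link this drawer's OpenGL program, load textures
    void draw(float progress); // draw one frame of the transition at the given progress
    void release();            // delete the program and textures

    // Convenience helper: keep a progress value inside [0, 1].
    static float clampProgress(float p) {
        return Math.max(0f, Math.min(1f, p));
    }
}
```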
Then define a renderer to control how these IDrawers are used:
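The renderer's core job, deciding which drawer is active on a given frame and how far along its transition is, is pure arithmetic. A sketch under the fixed values described in the text (three drawers, 200 frames per transition; class and method names are our own):

```java
// Sketch of the scheduling logic a renderer could use to pick the active IDrawer.
// Assumes 3 transitions of 200 frames each, matching the example in the text.
class TransitionScheduler {
    static final int FRAMES_PER_TRANSITION = 200;
    static final int DRAWER_COUNT = 3; // drawers A, B, C

    // Index of the drawer that should draw this frame (clamped to the last one).
    static int drawerIndex(int frame) {
        return Math.min(frame / FRAMES_PER_TRANSITION, DRAWER_COUNT - 1);
    }

    // Progress within the active transition, in [0, 1).
    static float progress(int frame) {
        return (frame % FRAMES_PER_TRANSITION) / (float) FRAMES_PER_TRANSITION;
    }
}
```

In onDrawFrame the renderer would call something like drawers[drawerIndex(frame)].draw(progress(frame)) and then increment the frame counter.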
Here, to keep the walkthrough simple, the textures and the duration of each transition (i.e., the number of frames used) are hard-coded. For example, with four pictures numbered 1, 2, 3, and 4, we define three IDrawers A, B, and C: A uses pictures 1 and 2, B uses pictures 2 and 3, and C uses pictures 3 and 4, and each transition takes 200 frames. In this way a combined transition across three OpenGL programs is achieved.
Here is one of the IDrawer implementation classes:
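The actual implementation is Android GL code; a condensed sketch of one such class is given below. Everything here is an assumption: the method set of IDrawer is not shown in the article, the uniform names match our earlier shader sketch, error handling is omitted, and the code needs a live GL context, so it is not runnable standalone.

```java
// Sketch of one IDrawer implementation: owns one program and two textures.
import android.opengl.GLES20;

class SimpleTransitionDrawer implements IDrawer {
    private final String vertexSrc, fragmentSrc;  // this transition's two shaders
    private final int fromTextureId, toTextureId; // textures created elsewhere
    private int program;

    SimpleTransitionDrawer(String vertexSrc, String fragmentSrc, int fromTex, int toTex) {
        this.vertexSrc = vertexSrc;
        this.fragmentSrc = fragmentSrc;
        this.fromTextureId = fromTex;
        this.toTextureId = toTex;
    }

    @Override public void prepare() {
        program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, compile(GLES20.GL_VERTEX_SHADER, vertexSrc));
        GLES20.glAttachShader(program, compile(GLES20.GL_FRAGMENT_SHADER, fragmentSrc));
        GLES20.glLinkProgram(program);
    }

    @Override public void draw(float progress) {
        GLES20.glUseProgram(program);
        // Bind the "from" texture to unit 0 and the "to" texture to unit 1.
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fromTextureId);
        GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "u_Texture1"), 0);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, toTextureId);
        GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "u_Texture2"), 1);
        GLES20.glUniform1f(GLES20.glGetUniformLocation(program, "progress"),
                Math.max(0f, Math.min(1f, progress)));
        // Vertex attribute setup and glDrawArrays for the full-window quad go here.
    }

    @Override public void release() {
        GLES20.glDeleteProgram(program);
    }

    private static int compile(int type, String src) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, src);
        GLES20.glCompileShader(shader);
        return shader;
    }
}
```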
In this way, the purpose of combining multiple transitions can be achieved.
When it comes to graphics processing on mobile, OpenGL is widely favored for its efficiency and good compatibility.
This article briefly introduced the basic concepts and drawing process of OpenGL, to give a preliminary understanding of how OpenGL draws. Within that process, what matters most for us as developers is writing vertex shaders and fragment shaders in GLSL. When using OpenGL for picture-carousel transitions, the key is writing the shaders the transitions need, for which we can refer to the open-source transitions on the GLTransitions website. The site offers rich transitions and shader code that can easily be ported to the client.
For complex transitions, that is, combinations of multiple transition effects, this article also offered an approach: combine multiple OpenGL programs, loading and using the appropriate program at the appropriate point in time.
Constrained by length, this article has shared only some of our thoughts and practice on developing video transition effects based on OpenGL. We hope it helps, and we welcome further exchange on audio and video editing practice.
“Ctrip Technology” public account