Talking about abstractions for cross-platform graphics APIs

Original link: https://blog.gotocoding.com/archives/1737?utm_source=rss&utm_medium=rss&utm_campaign=%25e8%25b0%2588%25e8%25b0%2588%25e8%25b7%25a8%25e5%25b9%25b3%25e5%258f%25b0%25e5%259b%25be%25e5%25bd%25a2api%25e7%259a%2584%25e6%258a%25bd%25e8%25b1%25a1

According to my original plan in March, I would first clone the basic mode of Honor of Kings, abstract from it a general Lua-based client framework, and then optimize it gradually as needs arose.

However, at the end of March, the GAMES series released a new course, GAMES104, “Modern Game Engines: From Introduction to Practice”.

This course immediately ignited my interest, so I decided to suspend the client framework plan; after finishing GAMES104, I will come back and continue developing it.

After years of observation, I found that, due to limited computing power, many advanced techniques debut in PC games first and only make their way into mobile game development years later (sometimes even requiring various tricks just to run). Therefore, if you want to learn and experience the latest engine technology, it is best to work with a PC game engine.

I plan to take advantage of this GAMES104 course to write my own engine.

This engine should use the latest technology and latest hardware features.

The business-logic language of this engine is Lua. In terms of expressiveness, Lua is much stronger than C and C++. Performance will be slower, but since this is an experimental engine, rapid development matters more.

This engine should be cross-platform. Although my main target is PC games, I also hope it can run on my phone, if the phone's computing power allows.

It took me a week to work through the example in the Vulkan tutorial (drawing a triangle; just copying it took me three and a half days ^_^!).

Then I started implementing the engine by following the GAMES104 video course.

Although the first version of the engine is based on the Vulkan graphics API, I still hope to first abstract something like an RHI (Render Hardware Interface), to lay the foundation for supporting Direct3D and Metal in the future.

This is hard for me: I have no background in Direct3D or Metal, and only a week of experience with Vulkan.

I still want to try it.


The easiest solution to think of is to design identical interfaces and identical exported structures for all graphics APIs, and then switch between platforms with macros, which is exactly what an RHI looks like on the surface.

The pseudocode is as follows:

```cpp
//-----rhi/texture.h------
namespace rhi {
    gpu_handle texture_create();
    void texture_destroy(gpu_handle handle);
}

//-----rhi/texture.cpp-------
#ifdef RHI_VULKAN
    #include "vulkan/texture.cpp"
#elif defined(RHI_DIRECT3D)
    #include "direct3d/texture.cpp"
#elif defined(RHI_METAL)
    #include "metal/texture.cpp"
#endif

//-----render/texture2d.h
namespace render {
class texture2d {
public:
    texture2d()  { gpu_texture = rhi::texture_create(); }
    ~texture2d() { rhi::texture_destroy(gpu_texture); }
private:
    gpu_handle gpu_texture;
    int width;
    int height;
};
}
```

But doing so presents some thorny issues.

Taking texture2d as an example: when the Vulkan layer operates on a texture, it sometimes needs attributes such as width and height, write_enable, filter_mode, and so on. How should it get these attributes?

At this point there are three options:

  • The first solution: pass all required parameters when calling rhi::texture_create(), and have the Vulkan layer save them internally for later use. This has two disadvantages: severe data redundancy, and extra code to keep the properties of texture2d and gpu_texture in sync.

  • The second scheme: when calling rhi::texture_create(), pass the this pointer of texture2d directly, and let the Vulkan layer bind gpu_texture to it internally. Through this binding, the Vulkan layer can look up the texture2d pointer and read the relevant settings whenever it operates on gpu_texture. This also has disadvantages. First, it creates circular references: in the render layer, texture2d refers to gpu_texture, while in the Vulkan layer, gpu_texture refers back to texture2d. Second, because the parameter of rhi::texture_create is typed, a separate texture_create/texture_destroy interface must be added for each kind of texture (texture2d, texture3d, cubemap, and so on).

  • The third scheme: on the basis of the second scheme, the circular reference can be cut off by removing gpu_handle entirely.
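To make the first solution concrete, here is a minimal sketch of a "pass everything at creation" interface; texture_desc, g_textures, and the field names are invented for illustration, and the GPU calls are stubbed out:

```cpp
#include <cstdint>
#include <unordered_map>

namespace rhi {

using gpu_handle = std::uint64_t;

// Every attribute the backend might later need is passed at creation time.
struct texture_desc {
    int  width  = 0;
    int  height = 0;
    bool write_enable = false;
    int  filter_mode  = 0; // e.g. 0 = nearest, 1 = linear
};

// The backend keeps its own copy of the descriptor next to the GPU
// resource -- this duplication is the data redundancy noted above.
static std::unordered_map<gpu_handle, texture_desc> g_textures;
static gpu_handle g_next = 1;

gpu_handle texture_create(const texture_desc &desc) {
    gpu_handle h = g_next++;
    g_textures[h] = desc; // redundant copy of CPU-side state
    return h;
}

void texture_destroy(gpu_handle h) { g_textures.erase(h); }

} // namespace rhi
```

Any change to width or filter_mode on the render-layer object now has to be mirrored into this copy, which is exactly the synchronization code the first solution's disadvantage refers to.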

The pseudocode of the third scheme is as follows:

```cpp
//-----rhi/texture.h------
namespace render { class texture2d; } // forward declaration

namespace rhi {
    bool texture_create(render::texture2d *tex);
    void texture_destroy(render::texture2d *tex);
}

//-----rhi/texture.cpp-------
#ifdef RHI_VULKAN
    #include "vulkan/texture.cpp"
#elif defined(RHI_DIRECT3D)
    #include "direct3d/texture.cpp"
#elif defined(RHI_METAL)
    #include "metal/texture.cpp"
#endif

//-----render/texture2d.h
namespace render {
class texture2d {
public:
    texture2d()  { rhi::texture_create(this); }
    ~texture2d() { rhi::texture_destroy(this); }
private:
    int width;
    int height;
};
}
```

When rhi::texture_create is called, the Vulkan layer creates the texture's GPU resource and binds it to the texture2d pointer, but this binding is never exported through the external interface.

Subsequent operations on a GPU resource can then use the texture2d pointer directly.

As for how to implement the binding, there are various ways; the simplest and most direct is an unordered_map (whose performance is obviously not great).
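A minimal sketch of such a binding table on the Vulkan side might look like this; vk_texture and its contents are stand-ins for the real Vulkan handles, and the function bodies only stub out the actual GPU work:

```cpp
#include <cstdint>
#include <unordered_map>

namespace render { class texture2d; } // render-layer type, opaque here

namespace vulkan {

// Stand-in for the real Vulkan resources (VkImage, VkImageView, ...).
struct vk_texture {
    std::uint64_t image = 0;
};

// Maps the render-layer object to its GPU-side resource.
// This table never leaves the Vulkan layer.
static std::unordered_map<const render::texture2d *, vk_texture> g_bindings;

bool texture_create(render::texture2d *tex) {
    // todo: allocate the real VkImage here
    g_bindings[tex] = vk_texture{/*image=*/1};
    return true;
}

void texture_destroy(render::texture2d *tex) {
    // todo: release the real VkImage here
    g_bindings.erase(tex);
}

} // namespace vulkan
```

Every internal operation first does a hash lookup on the texture2d pointer, which is where the performance cost mentioned above comes from.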

The third scheme shares a problem with the second: a single texture2d resource is represented by at least two objects at once, the render layer's texture2d and the Vulkan layer's gpu_texture2d, which causes memory fragmentation.


I spent two weeks refactoring it many times, but was never satisfied.

During the refactoring process, I came up with a whole new idea.

The pseudocode is as follows:

```cpp
//-----render/texture2d.h
namespace render {
class texture2d {
public:
    static texture2d *create(int width, int height);
    static void destroy(texture2d *tex);
protected:
    texture2d() {}
    ~texture2d() {}
protected:
    gpu_handle gpu_texture;
    int width;
    int height;
};
}

//-----vulkan/vk_texture2d.h
namespace vulkan {
class vk_texture2d : public render::texture2d {
public:
    vk_texture2d(int width, int height) : texture2d() {
        // todo: create the GPU resource
    }
    ~vk_texture2d() {
        // release the GPU resource; ~texture2d() runs automatically
    }
private:
    // some GPU-related resources
};
}

//-----vulkan/vk_factory.cpp
namespace render {
texture2d *texture2d::create(int width, int height) {
    return new vulkan::vk_texture2d(width, height);
}
void texture2d::destroy(texture2d *tex) {
    delete static_cast<vulkan::vk_texture2d *>(tex);
}
}
```

In this abstraction, I removed the entire RHI middle layer, and almost all of the shortcomings of the earlier schemes are solved: no memory fragmentation, no circular references, and the glue logic synchronizing data between the GPU and CPU sides disappears.
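A compilable sketch of this pattern follows; the accessor names and the gpu_image_ member are illustrative, and the real Vulkan calls are stubbed out:

```cpp
#include <cstdint>

namespace render {

// Render-layer interface: creation goes through a factory, so the
// business logic never sees the platform-specific derived class.
class texture2d {
public:
    static texture2d *create(int width, int height);
    static void destroy(texture2d *tex);
    int width() const  { return width_; }
    int height() const { return height_; }
protected:
    texture2d(int w, int h) : width_(w), height_(h) {}
    ~texture2d() {}
private:
    int width_;
    int height_;
};

} // namespace render

namespace vulkan {

// The platform-specific subclass holds the GPU resource directly,
// so one allocation carries both the CPU-side and GPU-side state.
class vk_texture2d : public render::texture2d {
public:
    vk_texture2d(int w, int h) : texture2d(w, h), gpu_image_(1) {
        // todo: real vkCreateImage call
    }
    ~vk_texture2d() {
        // todo: real vkDestroyImage call
        gpu_image_ = 0;
    }
private:
    std::uint64_t gpu_image_; // stand-in for VkImage
};

} // namespace vulkan

// The factory is the only place that knows the concrete type.
namespace render {
texture2d *texture2d::create(int w, int h) {
    return new vulkan::vk_texture2d(w, h);
}
void texture2d::destroy(texture2d *tex) {
    delete static_cast<vulkan::vk_texture2d *>(tex);
}
}
```

Selecting a Direct3D or Metal backend then means compiling a different factory file, with no `#ifdef` switching in the headers at all.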

Of course, this abstraction has its own shortcomings: rendering objects can no longer be created with a plain new (creation must go through the factory functions), and it relies on inheritance.

However, compared to the problems it solves, I don't think these two issues are big problems.

The business logic is written in Lua anyway, so new would never be used to create rendering objects.

Using less or not using inheritance is a principle I have always adhered to.

Finally, the complete code is attached.

The post Talking about abstractions for cross-platform graphics APIs first appeared on the Return to Chaos Blog.

