This article deals with the use of bool in C++. Should we use it or not? That is the question we will try to answer here. However, this is more of an open discussion than a coding rule.
First of all, what is the bool type? A boolean variable is a variable that can be set to false or true.
Imagine you have a simple function to decide whether or not to buy a house; you may design it like this.
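A plausible sketch of such a declaration and its call site (the parameter names here are purely illustrative):
// Illustrative signature: two raw booleans
bool shouldBuyHouse(bool hasSwimmingPool, bool hasEconomicalLights);
// At the call site, the reader sees only naked true/false values
if(shouldBuyHouse(true, false)) {
    // which one was the swimming pool again?
}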
Now you are happy: the reader knows exactly what each bool means. Are you sure? At the call site, nothing says which argument is which, and a very thorough reader may notice that a reversal of the parameters would go completely unnoticed by the compiler.
How to solve the problem?
There are different ways to solve this type of problem. The first is to use a strong type. There are many libraries that offer this kind of thing; however, a simple enum class can do the trick.
Not only will the reader know which argument corresponds to which parameter, but also, in the case of a parameter inversion, the compiler will not let the error pass.
Let’s rewrite the function declaration:
enum class HouseWithSwimmingPool {No, Yes};
enum class HouseWithLights {Economical, Incandescent};
bool shouldBuyHouse(HouseWithSwimmingPool, HouseWithLights);
if(shouldBuyHouse(HouseWithSwimmingPool::Yes, HouseWithLights::Economical)) {
}
Conclusion
I would encourage people not to use the bool type for function parameters. What do you think? Do you use bool everywhere?
Hello!
This time I am going to talk about Multi Draw Indirect (MDI) rendering. This feature lets you enjoy the benefits of both multi-draw and indirect drawing.
Where does the overhead come from?
Issuing a lot of commands
Issuing a draw call in GPU-based rendering is a really heavy operation for the CPU. Knowing this, drawing a lot of models can be really expensive. A naive draw loop could look like this:
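A minimal sketch of such a loop, assuming hypothetical mesh fields (vao, model, indexCount) and a modelLocation uniform:
// One draw call (and often several state changes) per mesh: the CPU cost
// grows linearly with the number of meshes.
for(auto const &mesh : meshes) {
    glBindVertexArray(mesh.vao);
    glUniformMatrix4fv(modelLocation, 1, GL_FALSE, &mesh.model[0][0]);
    glDrawElements(GL_TRIANGLES, mesh.indexCount, GL_UNSIGNED_INT, nullptr);
}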
Now, suppose you want to use culling to improve performance. You know that performing it on the GPU side will be more efficient than doing it on the CPU, but you don't know how to use the result without reading data back from the GPU to the CPU… This is where indirect drawing shines.
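The commands consumed by indirect drawing are plain structures stored in a GPU buffer. Written out as a sketch, these are the two standard OpenGL layouts (the spec sometimes names primCount instanceCount):
struct DrawArraysIndirectCommand {
    GLuint count;        // number of vertices
    GLuint primCount;    // number of instances
    GLuint first;        // first vertex
    GLuint baseInstance; // first instance
};
struct DrawElementsIndirectCommand {
    GLuint count;        // number of indices
    GLuint primCount;    // number of instances
    GLuint firstIndex;   // first index
    GLint  baseVertex;   // value added to each index
    GLuint baseInstance; // first instance
};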
count specifies the number of indices (or vertices) to be rendered
primCount specifies the number of instances to be rendered (in our case, it will be 0 or 1)
first specifies the position of the first vertex (arrays version)
firstIndex specifies the position of the first index (elements version)
baseVertex specifies a value added to each index before the vertex is fetched
baseInstance specifies the first instance to be rendered (a bit tricky, but I am going to explain that later)
How to Use it
These structures should be put into an OpenGL Buffer Object using the target GL_DRAW_INDIRECT_BUFFER.
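A minimal sketch of uploading the commands (buildCommands is a hypothetical helper that fills one command per mesh):
std::vector<DrawElementsIndirectCommand> commands = buildCommands(); // hypothetical helper
GLuint indirectBuffer;
glGenBuffers(1, &indirectBuffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             commands.size() * sizeof(DrawElementsIndirectCommand),
             commands.data(), GL_DYNAMIC_DRAW);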
Suppose you have a big scene with 5 000 distinct objects and 100 000 meshes. You would have:
5 000 matrices in an SSBO
“5 000” materials (not really true, but you get the idea) in an SSBO
100 000 commands in your indirect buffer
An SSBO which contains bounding box data per mesh (to perform culling for each mesh)
Now, what you want is to render the whole scene. The steps to do that are:
Fill the matrices / materials / bounding boxes / indirect buffer
make a dispatch using a compute shader to perform culling
Issue a memory barrier
render
The first step is straightforward.
The second is easy: you use the indirect buffer as an SSBO in the compute shader and set the primCount value to 0 if the mesh is not visible, or to 1 otherwise.
The third: you are intending to issue an indirect draw command, so you issue a memory barrier (GL_COMMAND_BARRIER_BIT) to make the compute shader writes visible to it.
The fourth: render.
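Putting the four steps together, a frame could look roughly like this (a sketch; cullingProgram, drawProgram, vao, numMeshes and the various buffer bindings are assumed to exist):
// 2) GPU culling: the compute shader reads bounding boxes and writes primCount
glUseProgram(cullingProgram);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, indirectBuffer); // indirect buffer seen as an SSBO
glDispatchCompute((numMeshes + 255) / 256, 1, 1);
// 3) Make the writes visible to the indirect draw command
glMemoryBarrier(GL_COMMAND_BARRIER_BIT);
// 4) Render everything with one call
glUseProgram(drawProgram);
glBindVertexArray(vao);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, numMeshes, 0);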
Beautiful! But how do I know which data (which matrix, which material) to use for each draw?
The first way is to use gl_DrawIDARB which is pretty explicit.
The way we are going to see, and the one I advise, is to use the baseInstance field from the structures seen earlier.
Why is gl_DrawIDARB not convenient? Simply because it is slower than the second way on most implementations, and because we would not be able to use ARB_indirect_parameters with it.
So, for the second way, we must add one or several buffers to the prior list (two in our case: one for indexing the matrix buffer, and one for indexing the material buffer). These buffers contain integer values (the index of the matrix / material in its SSBO). Because they are fed through baseInstance, these buffers are vertex buffers using an instance divisor set through glVertexBindingDivisor.
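A sketch of declaring such a per-draw integer attribute (the binding index and attribute location are arbitrary choices):
// matrixIndexBuffer contains one GLuint per mesh (the index of its matrix in the SSBO)
GLuint const bindingIndex = 3, attribLocation = 7; // arbitrary choices
glBindVertexArray(vao);
glBindVertexBuffer(bindingIndex, matrixIndexBuffer, 0, sizeof(GLuint));
glVertexAttribIFormat(attribLocation, 1, GL_UNSIGNED_INT, 0);
glVertexAttribBinding(attribLocation, bindingIndex);
glVertexBindingDivisor(bindingIndex, 1); // advance once per instance, i.e. once per draw here
glEnableVertexAttribArray(attribLocation);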
A Caveat?
As you noticed, when you “remove” a command by setting primCount to 0, the command is not really removed… This is where the ARB_indirect_parameters extension comes in. Instead of setting primCount to 0, you leave it at 1, but if the mesh is not visible, you simply do not append the command to the buffer of really used commands; thanks to an atomic counter, you know exactly how many meshes should be rendered.
You have to bind the buffer holding that counter to GL_PARAMETER_BUFFER_ARB and use the glMultiDrawArraysIndirectCountARB / glMultiDrawElementsIndirectCountARB functions.
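A sketch of the draw call with the count variant (offsets are zero for simplicity; compactedIndirectBuffer and visibleCountBuffer are assumed to be filled by the culling shader):
// visibleCount was written by the culling shader through an atomic counter
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, compactedIndirectBuffer);
glBindBuffer(GL_PARAMETER_BUFFER_ARB, visibleCountBuffer);
glMultiDrawElementsIndirectCountARB(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr,   // offset into the indirect buffer
                                    0,         // offset into the parameter buffer
                                    numMeshes, // upper bound on the draw count
                                    0);        // tightly packed commands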
Hi!
Once again, I am going to present some Vulkan features: pipelines, barriers, memory management, and everything useful for them. This article will be long, but it is separated into several chapters.
Memory Management
In a Vulkan application, it is up to the developer to manage the memory himself. The number of allocations is limited. Making one allocation per buffer or per image is really bad design in Vulkan. A good design is to make one big allocation (let's call it a chunk), manage it yourself, and sub-allocate buffers or images within the chunk.
A Chunk Allocator
We need a simple object which is responsible for allocating chunks. It just has to select a suitable memory type and call allocate and free from the Vulkan API.
#include "chunkallocator.hpp"
#include "System/exception.hpp"
ChunkAllocator::ChunkAllocator(Device &device) : mDevice(device)
{
}
std::tuple<VkDeviceMemory, VkMemoryPropertyFlags, VkDeviceSize, char*>
ChunkAllocator::allocate(VkMemoryPropertyFlags flags, VkDeviceSize size) {
VkPhysicalDeviceMemoryProperties const &property = mDevice.memoryProperties();
int index = -1;
// Looking for a heap with good flags and good size
for(auto i(0u); i < property.memoryTypeCount; ++i)
if((property.memoryTypes[i].propertyFlags & flags) == flags)
if(size < property.memoryHeaps[property.memoryTypes[i].heapIndex].size)
index = i;
if(index == -1)
throw std::runtime_error("No good heap found");
VkMemoryAllocateInfo info = {};
info.sType = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
info.pNext = nullptr;
info.allocationSize = size;
info.memoryTypeIndex = index;
// Perform the allocation
VkDeviceMemory mem;
vulkanCheckError(vkAllocateMemory(mDevice, &info, nullptr, &mem));
mDeviceMemories.push_back(mem);
char *ptr = nullptr; // stays null if the memory is not host visible
// We map the memory if it is host visible
if(flags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT)
vulkanCheckError(vkMapMemory(mDevice, mem, 0, VK_WHOLE_SIZE, 0, (void**)&ptr));
return std::tuple<VkDeviceMemory, VkMemoryPropertyFlags, VkDeviceSize, char*>
(mem, flags, size, ptr);
}
ChunkAllocator::~ChunkAllocator() {
// We free all memory objects
for(auto &mem : mDeviceMemories)
vkFreeMemory(mDevice, mem, nullptr);
}
This piece of code is quite simple and easy to read.
Memory Pool
Memory pools are structures used to optimize dynamic allocation performance. In video games, using a memory pool is not optional. The idea is the same as in the first part: allocate a chunk, and sub-allocate yourself within the chunk. I made a simple generic memory pool.
Here is a little diagram which explains what I wanted to do.
As you can see, video memory is separated into several parts (4 here) and each “Block” in the linked list describes one sub-allocation.
One block is described by:
The size of the block
The offset of the block relative to the DeviceMemory
A pointer to set data from the host (map)
A boolean to tell whether the block is free
A sub-allocation within a chunk is performed as follows:
Traverse the linked list until we find a well-sized free block
Modify the size and set the boolean to false
Create a new block, set size, offset and put boolean to true and insert it after the current one.
Freeing is quite simple: you just have to set the boolean back to true.
Another useful method could be a “shrink to fit”: if several consecutive blocks are free, we merge them into one.
#include "memorypool.hpp"
#include <cassert>
MemoryPool::MemoryPool(Device &device) :
mDevice(device), mChunkAllocator(device) {}
Allocation MemoryPool::allocate(VkDeviceSize size, VkMemoryPropertyFlags flags) {
if(size % 128 != 0)
size = size + (128 - (size % 128)); // 128 bytes alignment
assert(size % 128 == 0);
for(auto &chunk: mChunks) {
// if flags are okay
if((chunk.flags & flags) == flags) {
int indexBlock = -1;
// We are looking for a good block
for(auto i(0u); i < chunk.blocks.size(); ++i) {
if(chunk.blocks[i].isFree) {
if(chunk.blocks[i].size >= size) { // a block of exactly the right size also fits
indexBlock = i;
break;
}
}
}
// If a block is found
if(indexBlock != -1) {
Block newBlock;
// Set the new block
newBlock.isFree = true;
newBlock.offset = chunk.blocks[indexBlock].offset + size;
newBlock.size = chunk.blocks[indexBlock].size - size;
newBlock.ptr = chunk.blocks[indexBlock].ptr + size;
// Modify the current block
chunk.blocks[indexBlock].isFree = false;
chunk.blocks[indexBlock].size = size;
// If allocation does not fit perfectly the block
if(newBlock.size != 0)
chunk.blocks.emplace(chunk.blocks.begin() + indexBlock + 1, newBlock);
return Allocation(chunk.memory, chunk.blocks[indexBlock].offset, size, chunk.blocks[indexBlock].ptr);
}
}
}
// if we reach there, we have to allocate a new chunk
addChunk(mChunkAllocator.allocate(flags, 1 << 25));
return allocate(size, flags);
}
void MemoryPool::free(Allocation const &alloc) {
for(auto &chunk: mChunks)
if(chunk.memory == std::get<0>(alloc)) // Search the good memory device
for(auto &block : chunk.blocks)
if(block.offset == std::get<1>(alloc)) // Search the good offset
block.isFree = true; // put it to free
}
void MemoryPool::addChunk(const std::tuple<VkDeviceMemory, VkMemoryPropertyFlags, VkDeviceSize, char *> &ptr) {
Chunk chunk;
Block block;
// Add a block mapped along the whole chunk
block.isFree = true;
block.offset = 0;
block.size = std::get<2>(ptr);
block.ptr = std::get<3>(ptr);
chunk.flags = std::get<1>(ptr);
chunk.memory = std::get<0>(ptr);
chunk.size = std::get<2>(ptr);
chunk.ptr = std::get<3>(ptr);
chunk.blocks.emplace_back(block);
mChunks.emplace_back(chunk);
}
Buffers
Buffers are a well-known part of OpenGL. In Vulkan, it is approximately the same, but you have to manage the memory yourself through a memory pool.
When you create a buffer, you have to give it a size and a usage (uniform buffer, index buffer, vertex buffer, …). You can also ask for a sparse buffer (sparse resources will be the subject of an article one day ^_^). You can also put it in concurrent sharing mode; thanks to that, you can access the same buffer from two different queues.
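As a sketch, creating a buffer and binding it to memory taken from the pool above could look like this (device is assumed to be a valid VkDevice, and Allocation is the tuple-like type returned by the pool):
VkBufferCreateInfo bufferInfo = {};
bufferInfo.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
bufferInfo.size = 1024;                                // size in bytes
bufferInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT; // how the buffer will be used
bufferInfo.sharingMode = VK_SHARING_MODE_EXCLUSIVE;    // one queue family at a time
VkBuffer buffer;
vulkanCheckError(vkCreateBuffer(device, &bufferInfo, nullptr, &buffer));
// Ask the implementation what the buffer really needs, then sub-allocate from the pool
VkMemoryRequirements requirements;
vkGetBufferMemoryRequirements(device, buffer, &requirements);
Allocation allocation = memoryPool.allocate(requirements.size,
                                             VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
                                             VK_MEMORY_PROPERTY_HOST_COHERENT_BIT);
vulkanCheckError(vkBindBufferMemory(device, buffer,
                                    std::get<0>(allocation), std::get<1>(allocation)));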
I chose host visible and host coherent memory. But it is not especially useful. Indeed, to achieve better performance, you may want to use non-coherent memory (but you will have to flush/invalidate your memory!!).
Host visible memory is not especially useful either; indeed, for indirect rendering, it can be smarter to fill all the structures on the GPU while performing culling!
Shaders
Shaders are the programmable parts of your pipelines (it is an approximation, obviously). For each stage (vertex processing, geometry processing, fragment processing…), the associated shader is invoked. In Vulkan, shaders are written in SPIR-V.
SPIR-V is to Vulkan what “.class” files are to Java. You may compile your GLSL sources to SPIR-V using glslangValidator.
Why is SPIR-V so powerful ?
SPIR-V allows developers to ship their application without the shaders' source code.
SPIR-V is an intermediate representation. Thanks to that, vendor implementations do not have to write a compiler for a specific high-level language. It results in lower complexity for the driver, which can optimize better and compile faster.
Shaders in Vulkan
Contrary to OpenGL shaders, compiling a shader (creating a shader module) is really easy in Vulkan.
My implementation keeps all shader modules in memory in a hash table. That prevents any shader from being recompiled.
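For reference, creating a shader module from a SPIR-V blob is just this (a sketch; loadSpirvFile is a hypothetical helper and device is assumed):
// spirv holds the binary produced by glslangValidator
std::vector<uint32_t> spirv = loadSpirvFile("triangle.vert.spv"); // hypothetical helper
VkShaderModuleCreateInfo moduleInfo = {};
moduleInfo.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
moduleInfo.codeSize = spirv.size() * sizeof(uint32_t); // size in bytes
moduleInfo.pCode = spirv.data();
VkShaderModule shaderModule;
vulkanCheckError(vkCreateShaderModule(device, &moduleInfo, nullptr, &shaderModule));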
Pipelines are objects used to dispatch work (compute pipelines) or to render something (graphics pipelines).
The beginning of this part is going to be a summary of the Vulkan specs.
Descriptors
Shaders access buffer and image resources through special variables. These variables are organized into sets of bindings, and each set is represented by a descriptor set object.
Descriptor Set Layout
They describe one set. One set is composed of an array of bindings; each binding is described by the following (a sketch is shown after this list):
A binding number
One type : Image, uniform buffer, SSBO, …
The number of values (Could be an array of textures)
The shader stages that can access the binding.
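For example, a layout with a single uniform buffer visible from the vertex stage could be declared like this (a sketch; device is assumed):
VkDescriptorSetLayoutBinding binding = {};
binding.binding = 0;                                        // binding number
binding.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER; // type
binding.descriptorCount = 1;                                // number of values
binding.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;            // stages that can access it
VkDescriptorSetLayoutCreateInfo layoutInfo = {};
layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layoutInfo.bindingCount = 1;
layoutInfo.pBindings = &binding;
VkDescriptorSetLayout setLayout;
vulkanCheckError(vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &setLayout));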
Allocation of Descriptor Sets
They are allocated from descriptor pool objects.
One descriptor pool object is described by the maximum number of sets it can allocate, and an array of descriptor type / count pairs it can allocate.
Once you have the descriptor pool, you can allocate sets from it (using both the descriptor pool and a descriptor set layout).
When you destroy the pool, the sets are destroyed as well.
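A sketch of a small pool able to hand out uniform-buffer descriptors, and one set allocated from it with the layout created earlier (device is assumed, as before):
VkDescriptorPoolSize poolSize = {};
poolSize.type = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
poolSize.descriptorCount = 16; // how many descriptors of this type the pool can hand out
VkDescriptorPoolCreateInfo poolInfo = {};
poolInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_POOL_CREATE_INFO;
poolInfo.maxSets = 16;         // how many sets the pool can allocate
poolInfo.poolSizeCount = 1;
poolInfo.pPoolSizes = &poolSize;
VkDescriptorPool pool;
vulkanCheckError(vkCreateDescriptorPool(device, &poolInfo, nullptr, &pool));
VkDescriptorSetAllocateInfo allocInfo = {};
allocInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_ALLOCATE_INFO;
allocInfo.descriptorPool = pool;
allocInfo.descriptorSetCount = 1;
allocInfo.pSetLayouts = &setLayout; // the layout created earlier
VkDescriptorSet set;
vulkanCheckError(vkAllocateDescriptorSets(device, &allocInfo, &set));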
Give buffer / image to sets
Now, we have descriptors, but we have to tell Vulkan where shaders can get data from.
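This is done with vkUpdateDescriptorSets; a sketch for the uniform buffer created earlier:
VkDescriptorBufferInfo bufferDescriptor = {};
bufferDescriptor.buffer = buffer;          // the VkBuffer created earlier
bufferDescriptor.offset = 0;
bufferDescriptor.range = VK_WHOLE_SIZE;
VkWriteDescriptorSet write = {};
write.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
write.dstSet = set;                        // the set allocated earlier
write.dstBinding = 0;                      // binding number inside the set
write.dstArrayElement = 0;
write.descriptorCount = 1;
write.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
write.pBufferInfo = &bufferDescriptor;
vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);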
Pipeline Layouts
Pipeline layouts are a kind of bridge between the pipeline and the descriptor sets. They let you manage push constants as well (we'll see them in a future article).
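A sketch of a pipeline layout referencing the descriptor set layout from before (no push constants yet):
VkPipelineLayoutCreateInfo pipelineLayoutInfo = {};
pipelineLayoutInfo.sType = VK_STRUCTURE_TYPE_PIPELINE_LAYOUT_CREATE_INFO;
pipelineLayoutInfo.setLayoutCount = 1;
pipelineLayoutInfo.pSetLayouts = &setLayout;   // descriptor set layout(s) used by the pipeline
pipelineLayoutInfo.pushConstantRangeCount = 0; // push constants are for a future article
VkPipelineLayout pipelineLayout;
vulkanCheckError(vkCreatePipelineLayout(device, &pipelineLayoutInfo, nullptr, &pipelineLayout));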
Implementation
Descriptor sets are not coupled with pipeline layouts, so we could separate the pipeline layout from the descriptor pool / sets, but currently I prefer to keep them coupled. It is a choice, and it may change in the future.
I am going to explain quickly what memory barriers are.
The idea behind a memory barrier is to ensure that writes are actually performed and visible before the data is read again.
When you perform a compute dispatch or a render, it is your duty to ensure that the data will be visible when you want to reuse it.
In our main.cpp example, I draw a triangle into a frame buffer and present it.
Image barriers are composed of access masks and layouts, and the pipeline barrier itself is submitted with stage masks.
Since the presentation is a read of a framebuffer, srcAccessMask is VK_ACCESS_MEMORY_READ_BIT.
Now, we want to render inside this image via a framebuffer, so dstAccessMask is VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT.
The image has just been presented, and now we want to render into it, so the layouts are obvious: from the presentation layout to the color attachment layout.
When we submit the image memory barrier to the command buffer, we have to tell it which stages are affected. Here, we wait for all previous commands and we resume at the first stage of the pipeline.
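A sketch of that first barrier (the stage masks below are one common choice, not the only valid one; cmd and swapchainImage are assumed):
VkImageMemoryBarrier barrier = {};
barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.srcAccessMask = VK_ACCESS_MEMORY_READ_BIT;            // presentation read the image
barrier.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT; // we will write to it as a color attachment
barrier.oldLayout = VK_IMAGE_LAYOUT_PRESENT_SRC_KHR;
barrier.newLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image = swapchainImage;
barrier.subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};
vkCmdPipelineBarrier(cmd,
                     VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, // wait for all previous commands
                     VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,    // resume at the first stage
                     0, 0, nullptr, 0, nullptr, 1, &barrier);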
The only difference is the order and the stage masks. Here we wait for the color attachment output stage (and not the fragment shader stage!!) and we resume at the end of the pipeline (it is not really easy to explain… but it is not illogical).
The steps to render something using pipelines are (a sketch follows the list):
Create pipelines
Create command pools, command buffer and begin them
Create vertex / index buffers
Bind pipelines to their subpass, bind buffers and descriptor sets
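A sketch of what the recording could look like once everything above exists (cmd, pipeline, pipelineLayout, set and the vertex/index buffers are assumed):
// Inside a render pass, after vkCmdBeginRenderPass
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                        0, 1, &set, 0, nullptr);
VkDeviceSize offset = 0;
vkCmdBindVertexBuffers(cmd, 0, 1, &vertexBuffer, &offset);
vkCmdBindIndexBuffer(cmd, indexBuffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(cmd, indexCount, 1, 0, 0, 0);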