Tag: C++

  • Range : Be expressive using smart iterators with Range based containers

    Hi !
    Today I am not going to talk about rendering. This article deals with expressiveness in C++. Expressiveness? Yes, but about containers, ranges, and iterators.
    If you want to know more about writing expressive code in C++, I advise you to visit fluentcpp.
    If you want to know more about ranges, I advise you to take a look at Range v3, written by Eric Niebler.
    The code you will see may not be the most optimized, but it gives an idea of what ranges are and how to implement them.

    Introduction

    How could we define a Range?

    The objective

    Prior to defining what a Range is, we are going to see what ranges let us do.

    int main()
    {
        std::list<int> list;
        std::vector<float> vector = {5.0, 4.0, 3.0, 2.0, 1.0, 0.0};
        list << 10 << 9 << 8 << 7 << 6 << vector;
        // list = 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0
    
        auto listFiltered = list | superiorThan(4) | multiplyBy(3);
        // type of listFiltered = Range<You do not want to know lol>
        // listFiltered = [10, 9, 8, 7, 6, 5] -> 30, 27, 24, 21, 18, 15
    
        auto listSorted = Range::sort(listFiltered | superiorThan(23));
        // type of listSorted is vector, else use Range::sort<std::list>
        // listSorted = [30, 27, 24] -> 24, 27, 30
    
        std::cout << list << listFiltered << listSorted;
    
        return 0;
    }

    Isn’t it amazing to write things like that? Okay, for direct operations on the container, it could be better in two ways:

    1. It is not “easy” to read if you want to compose operations: unique(sort(range)) is less readable than range | sort | unique, in my opinion. But that is just one “optimisation” to do :).
    2. It may not be optimal, since sort returns a container (here a vector) and therefore builds one.

    The overloading of operator<< is quite easy though:

    // writing
    template<template<typename, typename...> class Container, typename T, typename ...A>
    std::ostream &operator<<(std::ostream &stream, Container<T, A...> const &c) {
        for(auto const &e : c)
            stream << e << " ";
        stream << std::endl;
        return stream;
    }
    
    // Appending
    template<template<typename, typename...> class Container, typename T, typename ...A>
    Container<T, A...> &operator<<(Container<T, A...> &c, T const &v) {
        c.emplace_back(v);
        return c;
    }
    
    // Output must not be an ostream
    template<template<typename, typename> class Output, template<typename, typename> class Input,
             typename T1, typename A1, typename T2, typename A2>
    std::enable_if_t<!std::is_base_of<std::ostream, Output<T1, A1>>::value, Output<T1, A1>&>
    operator<<(Output<T1, A1> &o, Input<T2, A2> const &i) {
        std::copy(i.begin(), i.end(), std::back_inserter(o));
        return o;
    }
    
    template<template<typename, typename> class Output, template<typename> class Range,
             typename T1, typename A1, typename Iterator>
    std::enable_if_t<!std::is_base_of<std::ostream, Output<T1, A1>>::value, Output<T1, A1>&>
    operator<<(Output<T1, A1> &o, Range<Iterator> const &i) {
        std::copy(i.begin(), i.end(), std::back_inserter(o));
        return o;
    }

    Okay, there are a lot of templates. I hope you are not allergic to them. All the lines that follow will use and abuse templates and SFINAE.

    The definition of a Range

    A range is a way to traverse a container. It sits one abstraction level above iterators. To keep it simple, a Range owns the first iterator and the last one, and exposes a begin and an end function.

    template<typename Iterator>
    class _Range {
    public:
        using __IS_RANGE = void; // helper To know if the type is a range or not
    public:
        using const_iterator = Iterator;
        using value_type = typename const_iterator::value_type;
        explicit _Range(const_iterator begin, const_iterator end) : mBegin(begin), mEnd(end){}
    
        const_iterator begin() const {return mBegin;}
        const_iterator end() const {return mEnd;}
    private:
        const_iterator mBegin;
        const_iterator mEnd;
    };
    
    template<typename T, typename = void>
    struct is_range : std::false_type{};
    
    template<typename T>
    struct is_range<T, typename T::__IS_RANGE> : std::true_type{};
    
    
    template<typename Iterator>
    auto Range(Iterator begin, Iterator end) {
        return _Range<Iterator>(begin, end);
    }
    
    template<typename Container>
    auto Range(Container const &c) {
        return Range(c.begin(), c.end());
    }

    Smart (or proxy) iterators

    Okay, now things are becoming tricky. I hope I have not lost anyone.

    What exactly is an iterator?

    To be simple, an iterator is an abstraction of a pointer.
    There are several categories of iterators; to keep it simple, here is the list:

    1. Input: they can be compared, incremented, and dereferenced as an rvalue (output iterators can be dereferenced as an lvalue).
    2. Forward: not much difference from the prior one.
    3. Bidirectional: they can also be decremented.
    4. Random access: they support the arithmetic operators + and -, and inequality comparisons.

    Smart iterators in detail

    Lazy Initialization

    This principle tells us: “if the result of an operation is not needed right now, there is no need to compute it”. Simply put, the operation is performed only when we actually need its result. With an iterator, that could be when you dereference it, for instance.

    Different types of smart iterator

    Filter iterator

    This iterator skips all the values that do not satisfy a predicate. For example, if you want only the odd values of your container, incrementing the iterator advances to the next odd value, skipping all the even ones.

    Transform iterator

    This iterator dereferences the underlying iterator, applies a function to the dereferenced value, and returns the result.

    Implementation

    Basics

    Here we are going to implement our own iterator class. This class takes two template parameters.
    The first is the iterator we want to iterate over. The second is a tag that we use to perform a kind of tag dispatching.
    Moreover, this iterator must behave like… an iterator!

    So, we begin to write:

    template<class Iterator, class RangeIteratorTagStructure>
    class RangeIterator {
        Iterator mIt;
        RangeIteratorTagStructure mTag;
    public:
        using iterator_category = typename Iterator::iterator_category;
        using value_type = typename Iterator::value_type;
        using difference_type = typename Iterator::difference_type;
        using pointer = typename Iterator::pointer;
        using reference = typename Iterator::reference;

    One of the above typedefs will fail to compile if Iterator does not behave like an iterator.
    The tag has a constructor and may own data (a function / functor / lambda), other iterators (the end of the range?), or other things like that.

    The iterator must respect the Open/Closed Principle. That is why you should implement the methods not inside the class but outside (in a detail namespace, for instance). We will see these methods later. For now, we stay focused on the RangeIterator class.

    Constructors

    We need three constructors:
    1. A default constructor
    2. A variadic templated constructor that builds the tag
    3. A copy constructor

    We also need an assignment operator.

    RangeIterator() = default;
    
    template<typename ...Args>
    RangeIterator(Iterator begin, Args &&...tagArguments) :
        mIt(begin),
        mTag(std::forward<Args>(tagArguments)...) {
        detail::RangeIterator::construct(mIt, mTag);
    }
    
    RangeIterator(RangeIterator const &i) :
        mIt(i.mIt), mTag(i.mTag){}
    
    RangeIterator &operator=(RangeIterator f) {
        using std::swap;
        swap(f.mIt, this->mIt);
        swap(f.mTag, this->mTag);
        return *this;
    }

    There are no difficulties here.

    They also need comparison operators !

    And they are quite easy !

    bool operator !=(RangeIterator const &r) {
        return mIt != r.mIt;
    }
    
    bool operator ==(RangeIterator const &r) {
        return mIt == r.mIt;
    }

    Dereferencing: reference or value_type?

    I hesitated a lot between returning a reference and returning a copy. To make the transform iterator easier, I chose to return by copy.
    It means that you cannot dereference them as an lvalue:

    *it = something; // Does not work.

    The code is a bit tricky now, because the result of the dereference may not be the value type you expect. See std::back_insert_iterator, for instance.

    decltype(detail::RangeIterator::dereference(std::declval<Iterator>(), std::declval<RangeIteratorTagStructure>())) operator*() {
        return detail::RangeIterator::dereference(mIt, mTag);
    }
    
    decltype(detail::RangeIterator::dereference(std::declval<Iterator>(), std::declval<RangeIteratorTagStructure>())) operator->() {
        return detail::RangeIterator::dereference(mIt, mTag);
    }

    Forward iterator to go farther !

    Again, simple code !

    RangeIterator &operator++() {
        detail::RangeIterator::increment(mIt, mTag);
        return *this;
    }

    Backward iterator, to send you to hell!

    Okay, now, as promised, we are going to see how beautiful C++ templates are. If you don’t want to be driven crazy, I advise you to stop reading here.
    So, we saw that not all iterators support going “backward”. The idea is to enable this feature ONLY if the underlying iterator (the first template argument) supports it as well.
    It is time to reuse SFINAE (the first time was for the is_range structure we saw above).
    We are going to use the type trait std::enable_if<Expr, Type>.
    How do we do that?

    template<class tag = iterator_category>
    std::enable_if_t<std::is_base_of<std::bidirectional_iterator_tag, tag>::value,
    RangeIterator>
    &operator--() {
        detail::RangeIterator::decrement(mIt, mTag);
        return *this;
    }

    You MUST make this function a template, otherwise the compiler cannot discard it via SFINAE!!!

    FYI: concepts are not part of standard C++17, but GCC already lets you experiment with the Concepts TS (-fconcepts).

    Random iterator

    Now you can do it by yourself.
    But here is some code to help you (because I am a nice guy :p)

    template<class tag = iterator_category>
    std::enable_if_t<std::is_base_of<std::random_access_iterator_tag, tag>::value, RangeIterator>
    &operator+=(std::size_t n) {
        detail::RangeIterator::plusN(mIt, n, mTag);
        return *this;
    }
    
    template<class tag = iterator_category>
    std::enable_if_t<std::is_base_of<std::random_access_iterator_tag, tag>::value, RangeIterator>
    operator+(std::size_t n) {
        auto tmp(*this);
        tmp += n;
        return tmp;
    }
    
    template<class tag = iterator_category>
    std::enable_if_t<std::is_base_of<std::random_access_iterator_tag, tag>::value, difference_type>
    operator-(RangeIterator const &it) {
        return detail::RangeIterator::minusIterator(mIt, it.mIt, mTag);
    }
    
    template<class tag = iterator_category>
    std::enable_if_t<std::is_base_of<std::random_access_iterator_tag, tag>::value, bool>
    operator<(RangeIterator const &f) {
        return mIt < f.mIt;
    }
    
    // Operator a + iterator
    template<template<typename, typename> class RIterator, typename iterator, typename tag, typename N>
    std::enable_if_t<std::is_base_of<std::random_access_iterator_tag, typename iterator::iterator_category>::value,
    RIterator<iterator, tag>> operator+(N n, RIterator<iterator, tag> const &it) {
        auto tmp(it);
        tmp += n;
        return tmp;
    }

    Details

    Okay, now we are going to see what is hidden behind detail::RangeIterator.

    Normal iterators

    In this namespace, you MUST put the tags and the functions that operate on them.

    Here are the functions for a normal iterator.

    /*********** NORMAL ************/
    template<typename Iterator, typename Tag>
    inline void construct(Iterator , Tag) {
    
    }
    
    template<typename Iterator, typename Tag>
    inline typename Iterator::value_type dereference(Iterator it, Tag) {
        return *it;
    }
    
    template<typename Iterator, typename Tag>
    inline void increment(Iterator &it, Tag) {
        ++it;
    }
    
    template<typename Iterator, typename Tag>
    inline void decrement(Iterator &it, Tag) {
        --it;
    }
    
    template<typename Iterator, typename Tag>
    inline void plusN(Iterator &it, std::size_t n, Tag) {
        it += n;
    }
    
    template<typename Iterator, typename Tag>
    inline void minusN(Iterator &it, std::size_t n, Tag) {
        it -= n;
    }
    
    template<typename Iterator, typename Tag>
    inline typename Iterator::difference_type minusIterator(Iterator i1, Iterator const &i2, Tag) {
        return i1 - i2;
    }

    It is simple: a normal iterator just behaves like a normal iterator.

    Transform iterator

    I will not talk about the filter iterator, since it is not complicated to build once you understand the ideas. Just be careful with the construct function…

    The tag

    So, what is a transform iterator? It is simply an iterator that dereferences the value and applies a function to it.
    Here is the Tag structure.

    template<typename Iterator, typename Functor>
    struct Transform final {
        Transform() = default;
        Transform(Functor f) : f(f){}
        Transform(Transform const &f) : f(f.f){}
    
        std::function<typename Iterator::value_type(typename Iterator::value_type)> f;
    };

    It owns one std::function, and that’s it.

    The transform iterator becomes useful when you dereference it, so you only need to reimplement the dereference function.

    template<typename Iterator, typename Functor>
    inline typename Iterator::value_type dereference(Iterator it, Transform<Iterator, Functor> f) {
        return f.f(*it);
    }

    Thanks to overloading via tag dispatching, this function should (must??) be called without any issue (at least you hope :p).

    However, if you want to split the code across several files (which I can only advise you to do), you cannot do it this way; you have to specialize your templates instead. But you cannot partially specialize a function template. The idea is to use functors!

    Here is a little example using the dereference functor.

    decltype(std::declval<detail::RangeIterator::dereference<Iterator, Tag>>()(std::declval<Iterator>(), std::declval<Tag>())) operator*() {
        return detail::RangeIterator::dereference<Iterator, Tag>()(mIt, mTag);
    }
    
    // Normal iterator
    template<typename Iterator, typename Tag>
    struct dereference {
        inline typename Iterator::value_type operator()(Iterator it, Tag) const {
            return *it;
        }
    };
    
    // Transform iterator
    template<typename Iterator, typename Functor>
    struct dereference<Iterator, Transform<Iterator, Functor>> {
        inline typename Iterator::value_type operator()(Iterator it, Transform<Iterator, Functor> f) {
            return f.f(*it);
        }
    };

    The builder: the pipe operator (|)

    Okay, you have the iterator, you have the range class, you have your function, but now, how to gather them?

    What you want to write is something like that:

    auto range = vector | [](int v){return v * 2;};

    First, you need a function that creates a range owning two iterators:
    one that begins the sequence, and one that ends it.

    template<typename Container, typename Functor>
    auto buildTransformRange(Container const &c, Functor f) {
        using Iterator = RangeIterator<typename Container::const_iterator,
                                       detail::RangeIterator::Transform<typename Container::const_iterator, Functor>>;
        Iterator begin(c.begin(), f);
        Iterator end(c.end(), f);
        return Range(begin, end);
    }
    

    Once you have that, you want to overload the pipe operator, which keeps things simple:

    template<typename R, typename Functor>
    auto operator|(R const &r, Functor f)
        -> std::enable_if_t<std::is_same<std::result_of_t<Functor(typename R::value_type)>,
                                         typename R::value_type>::value,
                            decltype(Range::buildTransformRange(r, f))> {
        return Range::buildTransformRange(r, f);
    }

    Warning: don’t forget to handle rvalue references, to make it easy to use!

    Conclusion

    So, this article presented a new way to deal with containers. It allows more readable code and takes a functional approach. There is a lot to learn about it, so don’t stop here. Try one of the libraries below, try to develop your own, try to learn a functional language and… have fun!!!!

    I hope you liked this article. It is my first article that discusses only C++. It may contain errors; if you find one or have any problem, do not forget to tell me!

    Reference

    Range v3 by Eric Niebler: his range library is really powerful and I advise you to use it (and I hope it will be part of C++20).
    Ranges: The STL to the Next Level: because of (thanks to?) this talk, I am making a lot of modifications in all my projects… x).
    Range Library by me: I will make a lot of modifications: performance, convenience, and others.

  • Barriers in Vulkan : They are not that difficult

    Hi !
    Yes, I know, I lied: I said that my next article would be about buffers or images, but in the end I’d prefer to talk about barriers first. Barriers are, IMHO, really difficult to understand well, so this article might contain some mistakes. In that case, please let me know by mail or in a comment.
    By the way, parts of this article may remind you of the articles on GPU Open: Performance Tweets series: Barriers, fences, synchronization and Vulkan barriers explained.

    What memory barriers are for?

    Memory barriers are a source of bugs.
    More seriously, barriers are used for three (actually four) things:

    1. Execution barrier (synchronization): to ensure that prior commands have finished
    2. Memory barrier (memory visibility / availability): to ensure that prior writes are visible
    3. Layout transitioning (useful for images): to optimize the usage of the resource
    4. Reformatting

    I am not going to talk about reformatting because (it is a shame) I am not very confident with it.

    What exactly is an execution barrier?

    An execution barrier may remind you of a mutex on CPU threads. You write something into a resource; when you want to read what you wrote, you must wait until the write is finished.

    What exactly is a memory barrier?

    When you write something from one thread, the write may land in caches, and you must flush them to make it visible where you want to read that data. That is what memory barriers are for.
    They also handle layout transitions for images, to get the best performance your graphics card can offer.

    How it is done in Vulkan

    Now that we understand why barriers are so important, we are going to see how we can use them in Vulkan.

    Vulkan’s Pipeline

    (Figure: the Vulkan pipeline stages)

    To keep it simple, a command enters at the top_of_pipe stage and ends at the bottom_of_pipe stage.
    There is also an extra stage that refers to the host.

    Barriers between stages

    We are going to see two examples (inspired by GPU Open).
    We will begin with the worst case: your first command writes at every stage everywhere it can, and your second command reads at every stage everywhere it can.
    It simply means that you want the first command to finish completely before the second one begins.

    To put it simply, here is a scheme:
    (Figure: all stages to all stages barrier)

    • In gray: all the stages that must be executed before or after the barrier (or the ones that are never reached)
    • In red: above the barrier, the stages where the data are produced; below the barrier, the stages where the data are consumed.
    • In green: unblocked stages. You should try to have as many green stages as possible.

    As you can see, here you don’t have any green stages, so it is not good at all for performance.

    In Vulkan C++, you should have something like that:

    cmd.pipelineBarrier(
        vk::PipelineStageFlagBits::eAllCommands,
        vk::PipelineStageFlagBits::eAllCommands, ...);

    Some people use BOTTOM_OF_PIPE as the source and TOP_OF_PIPE as the destination. It is not wrong, but it is useful only for a pure execution barrier. These stages do not access memory, so they cannot make memory accesses visible or even available!!!! You should not (must not?) issue a memory barrier on these stages; we will come back to that later.

    Now, we are going to see a better case.
    Imagine your first command fills an image or a buffer (SSBO or imageStore) in the VERTEX_SHADER stage. Now imagine you want to use these data in the TESSELLATION_EVALUATION_SHADER stage.
    The prior scheme, after modification, becomes:
    (Figure: barrier from vertex shader to tessellation evaluation shader)

    As you can see, there are a lot of green stages, and that is very good!
    The Vulkan C++ code should be:

    cmd.pipelineBarrier(
        vk::PipelineStageFlagBits::eVertexShader,
        vk::PipelineStageFlagBits::eTessellationEvaluationShader, ...);

    By Region or not?

    This part may contain errors, so please let me know if you disagree.
    To begin, what does “by region” mean?
    A region is a small part of your framebuffer. If you specify a by-region dependency, it means that (in framebuffer-space stages) operations only need to be finished in the same region (whose size is implementation specific) and not in the whole image.
    Well, it is not obvious what the framebuffer-space stages are. In my opinion, and after reading the documentation, they span from EARLY_FRAGMENT_TESTS (or at least FRAGMENT_SHADER if early depth testing is not enabled) to COLOR_ATTACHMENT_OUTPUT.

    Actually, to me this flag lets the driver optimize a bit. However, it should be used only between subpasses, for subpass input attachments (and should not be useful elsewhere, IMHO).
    But I may be wrong!

    Everything above is wrong; if you want a plain explanation, see the comment from devsh. To make it simple, a by-region dependency means that the barrier operates only on “one pixel” of the image at a time. It can be used for input attachments or a pre-depth pass, for example.

    Memory Barriers

    Okay, we have now seen how to make a pure execution barrier (that is, without memory barriers).
    Memory barriers ensure availability for the first half of the memory dependency and visibility for the second half. We can see them as “flushing” and “invalidation”. Making information available does not mean that it is visible.
    Each kind of memory barrier has a srcAccessMask and a dstAccessMask.
    How do they work?

    Accesses and stages are coupled. For each stage in srcStageMask, all memory accesses using the access types in srcAccessMask will be made available. It can be seen as a flush of the caches named by srcAccessMask in all those stages.

    For dstStageMask / dstAccessMask, it is the same thing, but instead of making the information available, the information is made visible to those stages and those accesses.

    That’s why using BOTTOM/TOP_OF_PIPE in a memory barrier is meaningless.

    With buffer and image barriers, you can also transfer ownership of the resource from one queue family to another.
    For example: you upload an image on a queue used only for transfers. At the end, you must release it from the transfer queue and acquire it on the compute (or graphics) queue.

    Global Memory Barriers

    This kind of memory barrier applies to all memory objects that exist at the time of its execution.
    I do not have a good example of when to use it. Maybe if you have a lot of barriers to issue, it is better to use one global memory barrier.
    An example:

    vk::MemoryBarrier(
        vk::AccessFlagBits::eMemoryWrite,
        vk::AccessFlagBits::eMemoryRead);

    Buffer Memory Barriers

    Here, the access masks apply only to the buffer the barrier names.
    Here is the example :

    vk::BufferMemoryBarrier(
        vk::AccessFlagBits::eTransferWrite,
        vk::AccessFlagBits::eShaderRead,
        transferFamilyIndex,
        queueFamilyIndex,
        buffer,
        0, VK_WHOLE_SIZE);

    Image Memory Barriers

    Image memory barriers have another kind of utility. They can perform layout transitions.

    Example:
    I want to create the mipmaps of an image (we will see the complete function in another article) through vkCmdBlitImage.
    After a vkCmdBlitImage, I want to use the mipmap level I just wrote as the source for the next mipmap level.

    oldLayout must be TRANSFER_DST_OPTIMAL and newLayout must be TRANSFER_SRC_OPTIMAL.
    Which kind of access did I make, and which kind will I make next?
    That is easy: I performed a TRANSFER_WRITE and I want to perform a TRANSFER_READ.
    At which stage does my last command “finish”, and at which stage does my new command “begin”? Both in the TRANSFER stage.

    In C++ it is done by something like that:

    cmd.blitImage();

    vk::ImageMemoryBarrier imageBarrier(
        vk::AccessFlagBits::eTransferWrite,
        vk::AccessFlagBits::eTransferRead,
        vk::ImageLayout::eTransferDstOptimal,
        vk::ImageLayout::eTransferSrcOptimal,
        0, 0, image, subResourceRange);

    cmd.pipelineBarrier(
        vk::PipelineStageFlagBits::eTransfer,
        vk::PipelineStageFlagBits::eTransfer,
        vk::DependencyFlags(),
        nullptr, nullptr, imageBarrier);

    I hope that you enjoyed this article and that you learned something. Synchronization in Vulkan is not easy to handle, and what I wrote may (surely?) contain some errors.

    Reference:

    Memory barriers on TOP_OF_PIPE #128
    Specs

  • Vulkan Memory Management : How to write your own allocator

    Hi! This article deals with memory management in Vulkan. But first, I am going to tell you what happened in my life.

    State of my life

    Again, it has been more than one month since I wrote anything. So, where am I? I am in the last year of Télécom SudParis, following the High Tech Imaging courses, the image specialization of my school. The funny part: in parallel, I am a lecturer in a video-games specialization, where I taught OpenGL 3.3 (I cannot make an OpenGL 4 course: not everyone has the hardware for that). I also got an internship at Dassault Systèmes (France), beginning the first of February, where I will work on the soft shadow engine (OpenGL 4.5).

    Vulkan

    To begin: some articles I wrote before this one may contain mistakes, or things that are not well explained or not very optimized.

    Why come back to Vulkan?

    I came back to Vulkan because I wanted to make one of the first “amateur” renderers using Vulkan. I also wanted a better understanding of memory management, memory barriers, and other joys like that. Moreover, I made a repository with “a lot” of Vulkan examples: Vulkan example repository.
    I do not mean to replace Sascha Willems’ ones, but I propose my own way to do it, in C++, using Vulkan-Hpp.

    Memory Management with Vulkan

    Different kinds of memory

    Heap

    A graphics card can read memory from different heaps: its own heap, or the system heap (RAM).

    Type

    There are different memory types. For example, some memory is host cached, some host coherent, some device local, and so on.

    Host and device
    Host

    This memory resides in RAM. Such a heap generally has one (or several) types with the HOST_VISIBLE bit. It tells Vulkan that the memory can be mapped persistently; that way, you get a pointer and can write to it from the CPU.

    Device Local

    This memory resides on the graphics card. It is freaking fast, and generally not HOST_VISIBLE. That means you have to use a staging resource to write to it, or use the GPU itself.

    Allocation in Vulkan

    In Vulkan, the total number of allocations is limited by the driver (maxMemoryAllocationCount). That means you cannot make many allocations: you must not use one allocation per buffer or image, but one allocation for several buffers and images.
    In this article, I will not care about the CPU cache or anything like that; I will only focus on how to get the best from the GPU side.

    (Figure: memory management, good and bad)

    How will we do it?

    (Figure: the device allocator, chunks and blocks)
    As you can see, we have a Block, which represents the memory for one buffer or image; a Chunk, which represents one allocation (via vkAllocateMemory); and a DeviceAllocator that manages all the chunks.

    Block

    I defined a block as follows:

    struct Block {
        vk::DeviceMemory memory;
        vk::DeviceSize offset;
        vk::DeviceSize size;
        bool free;
        void *ptr = nullptr; // Useless if it is a GPU allocation
    
        bool operator==(Block const &block);
    };
    bool Block::operator==(Block const &block) {
        if(memory == block.memory &&
           offset == block.offset &&
           size == block.size &&
           free == block.free &&
           ptr == block.ptr)
            return true;
        return false;
    }

    A block, as its name says, defines a small region within one allocation.
    So it has an offset, a size, and a boolean that tells whether it is free or not.
    It may also own a ptr if the memory is host visible (mapped).

    Chunk

    A chunk is a memory region that contains a list of blocks. It represents a single allocation.
    What should a chunk let us do?

    1. Allocate a block
    2. Deallocate a block
    3. Tell us if the block is inside the chunk

    That gives us:

    #pragma once
    #include "block.hpp"
    
    class Chunk : private NotCopyable {
    public:
        Chunk(Device &device, vk::DeviceSize size, int memoryTypeIndex);
    
        bool allocate(vk::DeviceSize size, vk::DeviceSize alignment, Block &block);
        bool isIn(Block const &block) const;
        void deallocate(Block const &block);
        int memoryTypeIndex() const;
    
        ~Chunk();
    
    protected:
        Device mDevice;
        vk::DeviceMemory mMemory = VK_NULL_HANDLE;
        vk::DeviceSize mSize;
        int mMemoryTypeIndex;
        std::vector<Block> mBlocks;
        void *mPtr = nullptr;
    };

    One chunk allocates its memory inside the constructor.

    Chunk::Chunk(Device &device, vk::DeviceSize size, int memoryTypeIndex) :
        mDevice(device),
        mSize(size),
        mMemoryTypeIndex(memoryTypeIndex) {
        vk::MemoryAllocateInfo allocateInfo(size, memoryTypeIndex);
    
        Block block;
        block.free = true;
        block.offset = 0;
        block.size = size;
        mMemory = block.memory = device.allocateMemory(allocateInfo);
    
        if((device.getPhysicalDevice().getMemoryProperties().memoryTypes[memoryTypeIndex].propertyFlags & vk::MemoryPropertyFlagBits::eHostVisible) == vk::MemoryPropertyFlagBits::eHostVisible)
            mPtr = device.mapMemory(mMemory, 0, VK_WHOLE_SIZE);
    
        mBlocks.emplace_back(block);
    }

    While deallocation is really easy (just mark the block as free), allocation requires a bit of attention. You need to check whether a block is free and, if it is, check its size and, if necessary, create another block when the requested size is less than the available size. You also need to take care of memory alignment!

    void Chunk::deallocate(const Block &block) {
        auto blockIt(std::find(mBlocks.begin(), mBlocks.end(), block));
        assert(blockIt != mBlocks.end());
        // Just put the block to free
        blockIt->free = true;
    }
    
    bool Chunk::allocate(vk::DeviceSize size, vk::DeviceSize alignment, Block &block) {
        // if chunk is too small
        if(size > mSize)
            return false;
    
        for(uint32_t i = 0; i < mBlocks.size(); ++i) {
            if(mBlocks[i].free) {
                // Compute the usable size once the offset has been aligned
                vk::DeviceSize newSize = mBlocks[i].size;
    
                if(mBlocks[i].offset % alignment != 0)
                    newSize -= alignment - mBlocks[i].offset % alignment;
    
                // If match
                if(newSize >= size) {
    
                    // We compute offset and size that care about alignment (for this Block)
                    mBlocks[i].size = newSize;
                    if(mBlocks[i].offset % alignment != 0)
                        mBlocks[i].offset += alignment - mBlocks[i].offset % alignment;
    
                    // Compute the ptr address
                    if(mPtr != nullptr)
                        mBlocks[i].ptr = (char*)mPtr + mBlocks[i].offset;
    
                    // if perfect match
                    if(mBlocks[i].size == size) {
                        mBlocks[i].free = false;
                        block = mBlocks[i];
                        return true;
                    }
    
                    Block nextBlock;
                    nextBlock.free = true;
                    nextBlock.offset = mBlocks[i].offset + size;
                    nextBlock.memory = mMemory;
                    nextBlock.size = mBlocks[i].size - size;
                    mBlocks.emplace_back(nextBlock); // We add the newBlock
    
                    mBlocks[i].size = size;
                    mBlocks[i].free = false;
    
                    block = mBlocks[i];
                    return true;
                }
            }
        }
    
        return false;
    }
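
    The alignment arithmetic above is worth checking in isolation. Here is a tiny standalone model of it, using plain integers instead of Vulkan types; alignUp is a hypothetical helper name, not part of the allocator's code.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Round an offset up to the next multiple of alignment,
    // exactly like the "offset % alignment" adjustment in Chunk::allocate.
    std::uint64_t alignUp(std::uint64_t offset, std::uint64_t alignment) {
        std::uint64_t rest = offset % alignment;
        return (rest == 0) ? offset : offset + alignment - rest;
    }

    int main() {
        // An offset of 10 with an alignment of 8 is rounded up to 16,
        // "losing" 6 bytes of padding.
        assert(alignUp(10, 8) == 16);
        assert(alignUp(16, 8) == 16); // already aligned: unchanged

        // A free block of 100 bytes starting at offset 10 can therefore
        // satisfy an aligned request of at most 100 - 6 = 94 bytes,
        // which is the "newSize" computed in the loop above.
        std::uint64_t blockOffset = 10, blockSize = 100, alignment = 8;
        std::uint64_t usable = blockSize - (alignUp(blockOffset, alignment) - blockOffset);
        assert(usable == 94);
        return 0;
    }
    ```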

    Chunk Allocator

    The name may not be ideal, but the chunk allocator lets us separate the creation of a chunk from the chunk itself. We give it a size and it performs all the verifications we need.

    class ChunkAllocator : private NotCopyable
    {
    public:
        ChunkAllocator(Device &device, vk::DeviceSize size);
    
        // if size > mSize, allocate to the next power of 2
        std::unique_ptr<Chunk> allocate(vk::DeviceSize size, int memoryTypeIndex);
    
    private:
        Device mDevice;
        vk::DeviceSize mSize;
    };
    
    vk::DeviceSize nextPowerOfTwo(vk::DeviceSize size) {
        // Smallest power of two >= size, computed with integer
        // arithmetic to avoid floating-point rounding issues
        vk::DeviceSize power = 1;
        while(power < size)
            power <<= 1;
        return power;
    }
    
    bool isPowerOfTwo(vk::DeviceSize size) {
        // A power of two has exactly one bit set
        return size != 0 && (size & (size - 1)) == 0;
    }
    
    ChunkAllocator::ChunkAllocator(Device &device, vk::DeviceSize size) :
        mDevice(device),
        mSize(size) {
        assert(isPowerOfTwo(size));
    }
    
    std::unique_ptr<Chunk> ChunkAllocator::allocate(vk::DeviceSize size,
                                                    int memoryTypeIndex) {
        size = (size > mSize) ? nextPowerOfTwo(size) : mSize;
    
        return std::make_unique<Chunk>(mDevice, size, memoryTypeIndex);
    }
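
    The sizing policy can be checked on its own. This sketch mirrors the decision made in ChunkAllocator::allocate; DeviceSize stands in for vk::DeviceSize and the 1 MiB default chunk size is just an example value.

    ```cpp
    #include <cassert>
    #include <cstdint>

    using DeviceSize = std::uint64_t; // stand-in for vk::DeviceSize

    DeviceSize nextPowerOfTwo(DeviceSize size) {
        DeviceSize power = 1;
        while (power < size)
            power <<= 1;
        return power;
    }

    // Mirrors ChunkAllocator::allocate's sizing decision:
    // small requests get the default chunk size, big ones are
    // rounded up to a power of two.
    DeviceSize chunkSizeFor(DeviceSize requested, DeviceSize defaultSize) {
        return (requested > defaultSize) ? nextPowerOfTwo(requested) : defaultSize;
    }

    int main() {
        DeviceSize defaultSize = 1 << 20; // 1 MiB default chunk (example value)
        assert(chunkSizeFor(4096, defaultSize) == defaultSize);     // small request
        assert(chunkSizeFor((1 << 20) + 1, defaultSize) == (1 << 21)); // next power of two
        return 0;
    }
    ```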

    Device Allocator

    I began by writing an abstract class for Vulkan allocations:

    /**
     * @brief The AbstractAllocator lets the user allocate and deallocate blocks
     */
    class AbstractAllocator : private NotCopyable
    {
    public:
        AbstractAllocator(Device const &device) :
            mDevice(std::make_shared<Device>(device)) {
    
        }
    
        virtual Block allocate(vk::DeviceSize size, vk::DeviceSize alignment, int memoryTypeIndex) = 0;
        virtual void deallocate(Block &block) = 0;
    
        Device getDevice() const {
            return *mDevice;
        }
    
        virtual ~AbstractAllocator() = 0;
    
    protected:
        std::shared_ptr<Device> mDevice;
    };
    
    inline AbstractAllocator::~AbstractAllocator() {
    
    }
    

    As you can see, it is really simple. You can allocate and deallocate through this allocator. Next, I created a DeviceAllocator that inherits from AbstractAllocator.

    class DeviceAllocator : public AbstractAllocator
    {
    public:
        DeviceAllocator(Device device, vk::DeviceSize size);
    
        Block allocate(vk::DeviceSize size, vk::DeviceSize alignment, int memoryTypeIndex) override;
        void deallocate(Block &block) override;
    
    
    private:
        ChunkAllocator mChunkAllocator;
        std::vector<std::shared_ptr<Chunk>> mChunks;
    };
    

    This allocator owns a list of chunks and one ChunkAllocator to create them.
    Allocation is straightforward: we look for a suitable chunk we can allocate from; otherwise, we create a new chunk and we are done!

    DeviceAllocator::DeviceAllocator(Device device, vk::DeviceSize size) :
        AbstractAllocator(device),
        mChunkAllocator(device, size) {
    
    }
    
    Block DeviceAllocator::allocate(vk::DeviceSize size, vk::DeviceSize alignment, int memoryTypeIndex) {
        Block block;
        // We search a "good" chunk
        for(auto &chunk : mChunks)
            if(chunk->memoryTypeIndex() == memoryTypeIndex)
                if(chunk->allocate(size, alignment, block))
                    return block;
    
        mChunks.emplace_back(mChunkAllocator.allocate(size, memoryTypeIndex));
        // Do not put the allocation inside assert: it would be
        // stripped out in release builds (NDEBUG)
        bool result = mChunks.back()->allocate(size, alignment, block);
        assert(result);
        return block;
    }
    
    void DeviceAllocator::deallocate(Block &block) {
        for(auto &chunk : mChunks) {
            if(chunk->isIn(block)) {
                chunk->deallocate(block);
                return ;
            }
        }
        assert(!"unable to deallocate the block");
    }
    

    Conclusion

    Since I came back to Vulkan, I have gained a much better understanding of this new API, so I can write higher-quality articles than in March.
    I hope you enjoyed this remake of memory management.
    My next article will be about buffers and staging resources; it will be a short one. I will also write an article explaining how to load textures and their mipmaps.

    References

    Vulkan Memory Management

    Kisses and see you soon (probably this week !)