Tag: C++11

  • Lava erupting from Vulkan : Initialization or Hello World

    Hi there !
    A few weeks ago, on February 16th to be precise, Vulkan, the new graphics API from Khronos, was released. It gives much more control over the GPU than OpenGL (the API I loved before Vulkan ^_^).

    OpenGL’s problems

    Driver Overhead

    Rendering performance problems can come from the driver: video games rarely use the GPU perfectly (maybe 80% instead of 95-100% of utilization). Driver overhead has a big cost, and more recent OpenGL versions tend to address this problem with bindless textures, multi-draw, direct state access, etc.
    Keep in mind that each GPU call can have a big cost.
    Cass Everitt, Tim Foley, John McDonald and Graham Sellers presented Approaching Zero Driver Overhead with OpenGL in 2014.

    Multi threading

    With OpenGL, efficient multithreading is not possible, because an OpenGL context belongs to one and only one thread. That is why it is not so easy to make a draw call from another thread ^_^.

    Vulkan

    Vulkan is not really a low-level API, but it provides a far better abstraction for modern hardware. Vulkan is more than AZDO; it is, as Graham Sellers said, PDCTZO (Pretty Darn Close To Zero Overhead).

    Series of articles about Lava

    What is Lava ?

    Lava is the name I gave to my new graphics (physics?) engine. It will let me learn how Vulkan works, play with it, implement some global illumination algorithms, and probably share with you my learnings and feelings about Vulkan. It is possible that I'll make some mistakes, so if I do, please let me know !

    Why Lava ?

    Vulkan makes me think of a volcano, which makes me think of lava, so… I chose it 😀 .

    Initialization

    Now begins what I wanted to discuss: the initialization of Vulkan.
    First of all, you have to really know and understand what you intend to do. To begin, we are going to see how to get a simple pink window.

    Hello world with Vulkan

    When you are developing with Vulkan, I advise you to keep the Khronos specification open in another window (or on another screen if you are using multiple screens).
    To manage windows more easily, I am using GLFW 3.2, and yes, you have to compile it yourself ^_^, but it is not difficult at all, so it is not a big deal.

    Instance

    Contrary to OpenGL, in Vulkan there is no global state; an instance is roughly similar to an OpenGL context. An instance doesn't know anything about other instances; it is utterly isolated. The creation of an instance is really easy.

    Instance::Instance(unsigned int nExtensions, const char * const *extensions) {
        VkInstanceCreateInfo info;
    
        info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pNext = nullptr;
        info.flags = 0;
        info.pApplicationInfo = nullptr;
        info.enabledLayerCount = 0;
        info.ppEnabledLayerNames = nullptr;
        info.enabledExtensionCount = nExtensions;
        info.ppEnabledExtensionNames = extensions;
    
        vulkanCheckError(vkCreateInstance(&info, nullptr, &mInstance));
    }

    Physical devices, devices and queues

    From this Instance, you can retrieve all the GPUs in your computer.
    You create a connection between your application and the GPU you want using a VkDevice.
    When creating this connection, you also have to create queues.
    Queues are used to perform tasks: you submit a task to a queue and it will be executed.
    Queues are separated into several families.
    A good approach could be to use several queues, for example one for physics and one for graphics (or even two or three for the latter).
    You can also give a priority (between 0 and 1) to a queue. Thanks to that, if you consider a task less important, you just have to give its queue a low priority :).

    Device::Device(const PhysicalDevices &physicalDevices, unsigned i, std::vector<float> const &priorities, unsigned nQueuePerFamily) {
        VkDeviceCreateInfo info;
        std::vector<VkDeviceQueueCreateInfo> infoQueue;
    
        mPhysicalDevice = physicalDevices[i];
    
        infoQueue.resize(physicalDevices.queueFamilyProperties(i).size());
    
        info.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
        info.pNext = nullptr;
        info.flags = 0;
        info.queueCreateInfoCount = infoQueue.size();
        info.pQueueCreateInfos = &infoQueue[0];
        info.enabledExtensionCount = info.enabledLayerCount = 0;
        info.ppEnabledLayerNames = nullptr;     // the struct is on the stack, leaving these unset would be undefined behaviour
        info.ppEnabledExtensionNames = nullptr;
        info.pEnabledFeatures = &physicalDevices.features(i);
    
        for(auto j(0u); j < infoQueue.size(); ++j) {
            infoQueue[j].sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
            infoQueue[j].pNext = nullptr;
            infoQueue[j].flags = 0;
            infoQueue[j].pQueuePriorities = &priorities[j];
            infoQueue[j].queueCount = std::min(nQueuePerFamily, physicalDevices.queueFamilyProperties(i)[j].queueCount);
            infoQueue[j].queueFamilyIndex = j;
        }
    
        vulkanCheckError(vkCreateDevice(physicalDevices[i], &info, nullptr, &mDevice));
    }
    

    Image, ImageViews and FrameBuffers

    Images represent a one- or multi-dimensional array of data (1D, 2D or 3D).
    Images don't provide any getter or setter for their data. If you want to use them in your application, you must use ImageViews.

    An ImageView is directly tied to an image. The creation of an ImageView is not really complicated.

    ImageView::ImageView(Device &device, Image image, VkFormat format, VkImageViewType viewType, VkImageSubresourceRange const &subResourceRange) :
        mDevice(device), mImage(image) {
        VkImageViewCreateInfo info;
    
        info.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
        info.pNext = nullptr;
        info.flags = 0;
        info.image = image;
        info.viewType = viewType;
        info.format = format;
        info.components.r = VK_COMPONENT_SWIZZLE_R;
        info.components.g = VK_COMPONENT_SWIZZLE_G;
        info.components.b = VK_COMPONENT_SWIZZLE_B;
        info.components.a = VK_COMPONENT_SWIZZLE_A;
        info.subresourceRange = subResourceRange;
    
        vulkanCheckError(vkCreateImageView(device, &info, nullptr, &mImageView));
    }

    You can write into ImageViews via FrameBuffers. A FrameBuffer owns multiple ImageViews (attachments) and is used to write into them.

    FrameBuffer::FrameBuffer(Device &device, RenderPass &renderPass,
                             std::vector<ImageView> &&imageViews,
                             uint32_t width, uint32_t height, uint32_t layers)
        : mDevice(device), mRenderPass(renderPass),
          mImageViews(std::move(imageViews)),
          mWidth(width), mHeight(height), mLayers(layers){
        VkFramebufferCreateInfo info;
    
        std::vector<VkImageView> views(mImageViews.size());
    
        for(auto i(0u); i < views.size(); ++i)
            views[i] = mImageViews[i];
    
        info.sType = VK_STRUCTURE_TYPE_FRAMEBUFFER_CREATE_INFO;
        info.pNext = nullptr;
        info.flags = 0;
        info.renderPass = renderPass;
        info.attachmentCount = views.size();
        info.pAttachments = &views[0];
        info.width = width;
        info.height = height;
        info.layers = layers;
    
        vulkanCheckError(vkCreateFramebuffer(mDevice, &info, nullptr, &mFrameBuffer));
    }

    The way to render something

    A window is associated with a Surface (VkSurfaceKHR). To draw something, you have to render into this surface via a swapchain.

    From notions of Swapchains

    In Vulkan, you have to manage double buffering yourself via the Swapchain. When you create a swapchain, you link it to a Surface and tell it how many images you need. For double buffering, you need 2 images.

    Once the swapchain is created, you should retrieve its images and create frame buffers using them.

    The steps to get a correct swapchain are :

    1. Create a Window
    2. Create a Surface assigned to this Window
    3. Create a Swapchain with several images assigned to this Surface
    4. Create FrameBuffers using all of these images.

    vulkanCheckError(glfwCreateWindowSurface(instance, mWindow, nullptr, &mSurface));
    
    void SurfaceWindow::createSwapchain() {
        VkSwapchainCreateInfoKHR info;
    
        uint32_t nFormat;
        vkGetPhysicalDeviceSurfaceFormatsKHR(mDevice, mSurface, &nFormat, nullptr);
        std::vector<VkSurfaceFormatKHR> formats(nFormat);
        vkGetPhysicalDeviceSurfaceFormatsKHR(mDevice, mSurface, &nFormat, &formats[0]);
    
        if(nFormat == 1 && formats[0].format == VK_FORMAT_UNDEFINED)
            formats[0].format = VK_FORMAT_B8G8R8A8_SRGB;
    
        mFormat = formats[0].format;
        mRenderPass = std::make_unique<RenderPass>(mDevice, mFormat);
    
        info.sType = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
        info.pNext = nullptr;
        info.flags = 0;
        info.imageFormat = formats[0].format;
        info.imageColorSpace = formats[0].colorSpace;
        info.imageUsage = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
        info.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
        info.preTransform = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR;
        info.compositeAlpha = VK_COMPOSITE_ALPHA_INHERIT_BIT_KHR;
        info.presentMode = VK_PRESENT_MODE_MAILBOX_KHR;
        info.surface = mSurface;
        info.minImageCount = 2; // Double buffering...
        info.imageExtent.width = mWidth;
        info.imageExtent.height = mHeight;
        info.imageArrayLayers = 1;
        info.queueFamilyIndexCount = 0;
        info.pQueueFamilyIndices = nullptr;
        info.clipped = VK_TRUE;
        info.oldSwapchain = VK_NULL_HANDLE; // these fields are not optional: the struct is not zero-initialized
    
        vulkanCheckError(vkCreateSwapchainKHR(mDevice, &info, nullptr, &mSwapchain));
        initFrameBuffers();
    }
    void SurfaceWindow::initFrameBuffers() {
        VkImage images[2];
        uint32_t nImg = 2;
    
        vkGetSwapchainImagesKHR(mDevice, mSwapchain, &nImg, images);
    
        for(auto i(0u); i < nImg; ++i) {
            std::vector<ImageView> allViews;
            allViews.emplace_back(mDevice, images[i], mFormat);
            mFrameBuffers[i] = std::make_unique<FrameBuffer>(mDevice, *mRenderPass, std::move(allViews), mWidth, mHeight, 1);
        }
    }

    Using the swapchain is not difficult.

    1. Acquire the next image index
    2. Present the queue

    void SurfaceWindow::begin() {
        // No checking because could be in lost state if change res
        vkAcquireNextImageKHR(mDevice, mSwapchain, UINT64_MAX, VK_NULL_HANDLE, VK_NULL_HANDLE, &mCurrentSwapImage);
    }
    
    void SurfaceWindow::end(Queue &queue) {
        VkPresentInfoKHR info;
    
        info.sType = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
        info.pNext = nullptr;
        info.waitSemaphoreCount = 0;
        info.pWaitSemaphores = nullptr;
        info.swapchainCount = 1;
        info.pSwapchains = &mSwapchain;
        info.pImageIndices = &mCurrentSwapImage;
        info.pResults = nullptr;
    
        vkQueuePresentKHR(queue, &info);
    }
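    To see why two images are enough for double buffering, here is a toy model of the acquire/present index bookkeeping. It is a plain C++ sketch with no Vulkan at all (the ToySwapchain type is invented for illustration): the index you render into simply alternates, so you never write into the image being presented.

    ```cpp
    #include <cassert>

    // Toy model of a 2-image swapchain: acquire() hands out image indices
    // in round-robin order, which is conceptually what happens with a
    // FIFO-style swapchain of two images.
    struct ToySwapchain {
        unsigned imageCount = 2;
        unsigned next = 0;

        unsigned acquire() {
            unsigned img = next;
            next = (next + 1) % imageCount;
            return img;
        }
    };

    int main() {
        ToySwapchain sc;
        unsigned a = sc.acquire();  // frame N renders into image 0
        unsigned b = sc.acquire();  // frame N+1 renders into image 1
        assert(a != b);             // we never draw into the image on screen
        assert(sc.acquire() == a);  // and then we cycle back
        return 0;
    }
    ```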

    To notions of Render Pass

    Right now, Vulkan should be initialized. To render something, we have to use render passes and command buffers.

    Command Buffers

    A command buffer is quite similar to a vertex array object (VAO) or a display list (old old old OpenGL 😀 ).
    You begin the recording state, you record some “information” and you end the recording state.
    Command buffers are allocated from a CommandPool.

    Vulkan provides two types of Command Buffer.

    1. Primary level : they are submitted directly to a queue.
    2. Secondary level : they are executed by a primary-level command buffer.

    std::size_t CommandPool::allocateCommandBuffer() {
        VkCommandBuffer cmd;
        VkCommandBufferAllocateInfo info;
    
        info.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
        info.pNext = nullptr;
        info.commandPool = mCommandPool;
        info.commandBufferCount = 1;
        info.level = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    
        vulkanCheckError(vkAllocateCommandBuffers(mDevice, &info, &cmd));
    
        mCommandBuffers.emplace_back(cmd);
        return mCommandBuffers.size() - 1;
    }

    Renderpass

    One render pass is executed on one framebuffer. Its creation is not easy at all. One render pass is composed of one or several subpasses.
    Remember that framebuffers can have several attachments.
    Each attachment does not have to be used by every subpass.

    This piece of code to create a render pass is not definitive at all and will be changed as soon as possible ^^. But for our example, it is correct.

    RenderPass::RenderPass(Device &device, VkFormat format) :
        mDevice(device)
    {
        VkRenderPassCreateInfo info;
        VkAttachmentDescription attachmentDescription;
        VkSubpassDescription subpassDescription;
        VkAttachmentReference attachmentReference;
    
        attachmentReference.attachment = 0;
        attachmentReference.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    
        attachmentDescription.flags = VK_ATTACHMENT_DESCRIPTION_MAY_ALIAS_BIT;
        attachmentDescription.format = format;
        attachmentDescription.samples = VK_SAMPLE_COUNT_1_BIT;
        attachmentDescription.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
        attachmentDescription.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
        attachmentDescription.stencilLoadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
        attachmentDescription.stencilStoreOp = VK_ATTACHMENT_STORE_OP_STORE;
        attachmentDescription.initialLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
        attachmentDescription.finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    
        subpassDescription.flags = 0;
        subpassDescription.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
        subpassDescription.inputAttachmentCount = 0;
        subpassDescription.colorAttachmentCount = 1;
        subpassDescription.pColorAttachments = &attachmentReference;
        subpassDescription.pResolveAttachments = nullptr;
        subpassDescription.pDepthStencilAttachment = nullptr;
        subpassDescription.preserveAttachmentCount = 0;
        subpassDescription.pPreserveAttachments = nullptr;
    
        info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
        info.pNext = nullptr;
        info.flags = 0;
        info.attachmentCount = 1;
        info.pAttachments = &attachmentDescription;
        info.subpassCount = 1;
        info.pSubpasses = &subpassDescription;
        info.dependencyCount = 0;
        info.pDependencies = nullptr;
    
        vulkanCheckError(vkCreateRenderPass(mDevice, &info, nullptr, &mRenderPass));
    }

    In the same way as a command buffer, a render pass must be begun and ended!

    void CommandPool::beginRenderPass(std::size_t index,
                                      FrameBuffer &frameBuffer,
                                      const std::vector<VkClearValue> &clearValues) {
        assert(index < mCommandBuffers.size());
        VkRenderPassBeginInfo info;
        VkRect2D area;
    
        area.offset = VkOffset2D{0, 0};
        area.extent = VkExtent2D{frameBuffer.width(), frameBuffer.height()};
    
        info.sType = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
        info.pNext = nullptr;
        info.renderPass = frameBuffer.renderPass();
        info.framebuffer = frameBuffer;
        info.renderArea = area;
        info.clearValueCount = clearValues.size();
        info.pClearValues = &clearValues[0];
    
        vkCmdBeginRenderPass(mCommandBuffers[index], &info, VK_SUBPASS_CONTENTS_INLINE);
    }
    

    Our engine in action

    For now, our “engine” is not really usable ^^.
    But in the future, the command pool and the render pass should not appear in user code !

    #include "System/contextinitializer.hpp"
    #include "System/Vulkan/instance.hpp"
    #include "System/Vulkan/physicaldevices.hpp"
    #include "System/Vulkan/device.hpp"
    #include "System/Vulkan/queue.hpp"
    #include "System/surfacewindow.hpp"
    #include "System/Vulkan/exception.hpp"
    #include "System/Vulkan/commandpool.hpp"
    #include "System/Vulkan/fence.hpp"
    
    void init(CommandPool &commandPool, SurfaceWindow &window) {
        commandPool.reset();
    
        VkClearValue value;
        value.color.float32[0] = 0.8;
        value.color.float32[1] = 0.2;
        value.color.float32[2] = 0.2;
        value.color.float32[3] = 1;
    
        for(int i = 0; i < 2; ++i) {
            commandPool.allocateCommandBuffer();
            commandPool.beginCommandBuffer(i);
            commandPool.beginRenderPass(i, window.frameBuffer(i), {value});
            commandPool.endRenderPass(i);
            commandPool.endCommandBuffer(i);
        }
        commandPool.allocateCommandBuffer();
    }
    
    void mainLoop(SurfaceWindow &window, Device &device, Queue &queue) {
        Fence fence(device, 1);
        CommandPool commandPool(device, 0);
    
        while(window.isRunning()) {
            window.updateEvent();
            if(window.neetToInit()) {
                init(commandPool, window);
                std::cout << "Initialisation" << std::endl;
                window.initDone();
            }
            window.begin();
            queue.submit(commandPool.commandBuffer(window.currentSwapImage()), 1, *fence.fence(0));
            fence.wait();
            window.end(queue);
        }
    }
    
    int main()
    {
        ContextInitializer context;
        Instance instance(context.extensionNumber(), context.extensions());
        PhysicalDevices physicalDevices(instance);
        Device device(physicalDevices, 0, {1.f}, 1);
        Queue queue(device, 0, 0);
    
        SurfaceWindow window(instance, device, 800, 600, "Lava");
    
        mainLoop(window, device, queue);
    
        glfwTerminate();
    
        return 0;
    }

    If you want the whole source code :
    GitHub

    Reference

    Approaching Zero Driver Overhead : Lecture
    Approaching Zero Driver Overhead : Slides
    Vulkan Overview 2015
    Vulkan in 30 minutes
    VkCube
    GLFW with Vulkan

  • Flux: Qt Quick with unidirectional data flow

    Hello !

    Today is an important day: it is the day of the first article I have written about Qt.

    Have you ever heard anything about Model View Controller or Model View Delegate? Yes, obviously. But right now, we're going to talk about another thing (yes, I am funny, I know). We are going to talk about Facebook; I mean, a pattern which comes from Facebook.

    What are we going to talk about??

    We are going to talk about the Flux pattern. This pattern says the data flow should be unidirectional, as opposed to the Model View Delegate pattern.

    Model View Delegate multi-directional data flow : Credit Qt

    Flux pattern representation

    Flux unidirectional data flow

    What are the advantages of using the Flux pattern?

    1. Signal propagation is easy and does not require any copy and paste.
    2. The code is easy to read.
    3. There is low coupling.

    What is the Action Creator?

    When a user wants to interact with the application, he wants to perform an “Action“. For example, he could want to add things to a todo list, so he could launch an Action(“Things to do”);
    The Action Creator is here to hand Actions to our dispatcher.

    What is the Dispatcher?

    A dispatcher takes an action and its arguments and dispatches it through all stores.

    What is the Store?

    A store is like a collection of data, but it also has logic buried inside it.

    What is the View?

    It shows all data.
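    Before looking at the QML implementation, the four roles can be sketched in plain, Qt-free C++ (all the types here are invented for illustration, not part of the real app): the view only ever emits actions, and the store is the only place where data is mutated.

    ```cpp
    #include <cassert>
    #include <functional>
    #include <string>
    #include <vector>

    // Dispatcher: forwards every action to all registered stores.
    struct Dispatcher {
        std::vector<std::function<void(const std::string&, int)>> stores;
        void dispatch(const std::string &action, int arg) {
            for (auto &s : stores) s(action, arg);
        }
    };

    // Store: owns the data and the logic that mutates it.
    struct CounterStore {
        std::vector<int> counters;
        void onDispatched(const std::string &action, int id) {
            if (action == "add")      counters.push_back(0);
            else if (action == "inc") counters[id]++;
            else if (action == "dec") counters[id]--;
        }
    };

    int main() {
        Dispatcher dispatcher;
        CounterStore store;
        dispatcher.stores.push_back(
            [&](const std::string &a, int arg) { store.onDispatched(a, arg); });

        // Action creators: the only way the "view" mutates state.
        dispatcher.dispatch("add", 0);
        dispatcher.dispatch("inc", 0);
        dispatcher.dispatch("inc", 0);
        dispatcher.dispatch("dec", 0);

        assert(store.counters.size() == 1);
        assert(store.counters[0] == 1); // the view would now re-read the store
        return 0;
    }
    ```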

    What are we going to see?

    We are going to see how to write a little application using Flux.
    Our application will look like this :
    [Screenshot: Screenshot_2016-02-19-20-22-20]

    Let’s code !

    First, we need to know what our application has to do.
    It must offer the possibility to add counters, and to increment and decrement them. We have exactly 3 actions. It is that simple !

    pragma Singleton
    import QtQuick 2.0
    
    Item {
        property string add: "add";
        property string inc: "inc";
        property string dec: "dec";
    }

    pragma Singleton
    import QtQuick 2.0
    
    Item {
        function add() {AppDispatcher.dispatch("add", {});}
        function inc(id) {AppDispatcher.dispatch("inc", {id:id});}
        function dec(id) {AppDispatcher.dispatch("dec", {id:id});}
    }
    

    Now, we are going to see what the AppDispatcher is.

    A dispatcher should take as arguments the action type and a message (an id, for example), and dispatch this action.

    class Dispatcher : public QObject {
        Q_OBJECT
    public:
        Dispatcher() = default;
    
    public slots:
        Q_INVOKABLE void dispatch(QString action, QJSValue args) {
            emit dispatched(action, args);
        }
    
    signals:
        void dispatched(QString action, QJSValue args);
    };

    #include <QApplication>
    #include <QQmlApplicationEngine>
    #include <QQmlContext>
    #include "dispatcher.h"
    
    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        Dispatcher dispatcher;
    
        QQmlApplicationEngine engine;
    
        engine.rootContext()->setContextProperty("AppDispatcher", QVariant::fromValue(&dispatcher));
    
        engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    
        return app.exec();
    }
    

    You could easily improve the dispatch behaviour. Indeed, it would be safer to use a queue… But in this example, I just show the mechanism and try not to overcomplicate the app.

    Remember, the dispatcher dispatches through the stores.
    A store is a singleton which manages all objects of the same type. Our store manages all counters in the app.

    A counter is an object with an id and a value; knowing that, we easily get :

    pragma Singleton
    import QtQuick 2.0
    import "."
    
    Item {
        property alias model: listModel;
        property int nextId: 1;
    
        ListModel {
            id: listModel;
    
            ListElement {
                idModel: 0;
                value: 0;
            }
        }
    
        function getItemID(idModel) {
            for(var i = 0; i < model.count; ++i) {
                if(model.get(i).idModel == idModel)
                    return model.get(i);
            }
        }
    
        Connections {
            target: AppDispatcher;
    
            onDispatched: {
                if(action === ActionType.add)
                    model.append({idModel: nextId++, value:0});
    
                else if(action === ActionType.inc)
                    getItemID(args.id).value++;
    
                else if(action === ActionType.dec)
                    getItemID(args.id).value--;
            }
        }
    }
    

    In onDispatched, check whether it is the right action, and if it is, do the required work.

    Now we just need a view. As you saw before, we use a “model” to store all the data; it will be the same for the view / delegate. Even though we use model view delegate here, we keep a unidirectional data flow.

    The delegate explains how an item should be rendered.
    It is composed of 2 buttons (+ and -) and a value :

    import QtQuick 2.0
    import QtQuick.Controls 1.4
    
    import "."
    
    Rectangle {
        property alias text: t.text;
        property int fontSize: 10;
        signal clicked;
    
    
        MouseArea {
            anchors.fill: parent;
            onClicked: parent.clicked();
        }
    
        Text {
            anchors.centerIn: parent;
            font.pointSize: fontSize;
            id:t;
        }
    }
    
    
    import QtQuick 2.0
    import "."
    
    Rectangle{
        width: text.width;
        height: text.height;
        color: palette.window;
    
        Text {
            id: text;
            text:value;
            font.pointSize: mainWindow.width < mainWindow.height ? mainWindow.width / 16: mainWindow.height / 16;
        }
    
        Button {
            anchors.left: text.right;
            anchors.verticalCenter: parent.verticalCenter;
    
            color: Qt.rgba(0.4, 0.7, 0.2, 1);
            width: mainWindow.width / 10;
            height: mainWindow.height / 10;
            text: "+";
            fontSize: mainWindow.width < mainWindow.height ? mainWindow.width / 20 : mainWindow.height / 20;
    
            onClicked: ActionCreator.inc(idModel);
        }
    
        Button {
            anchors.right: text.left;
            anchors.verticalCenter: parent.verticalCenter;
    
            color: Qt.rgba(0.7, 0.4, 0.2, 1);
            width: mainWindow.width / 10;
            height: mainWindow.height / 10;
            text: "-";
            fontSize: mainWindow.width < mainWindow.height ? mainWindow.width / 20 : mainWindow.height / 20;
    
            onClicked: ActionCreator.dec(idModel);
        }
    }
    

    Yeah I know, there is some code duplication, it is not good…
    Now that we have the possibility to render items, we should display many of them.
    Flux tells us the data comes from the Store, so let's implement what Flux says!

    import QtQuick 2.0
    import QtQuick.Controls 1.4
    import "."
    
    Rectangle {
        width: view.contentItem.childrenRect.width;
        height: view.contentItem.childrenRect.height;
    
        color: palette.window;
    
        ListView {
            id: view;
            anchors.fill: parent;
            model: CounterStore.model;
    
            spacing: 10;
    
            delegate: CounterItem{}
        }
    }
    
    import QtQuick 2.5
    import QtQuick.Controls 1.4
    import "."
    
    ApplicationWindow {
        id: mainWindow;
        visible: true
        width: 640
        height: 480
        title: qsTr("Hello World")
    
        SystemPalette {
            id: palette;
        }
    
    
        Button {
            id: buttonAdd;
            anchors.verticalCenter: parent.verticalCenter;
            width: parent.width / 5;
            height: parent.height;
            text: "Add";
            fontSize: 30;
            color: Qt.rgba(0.1, 0.3, 0.7, 1.0);
    
            onClicked: AppDispatcher.dispatch("add", {});
        }
    
        Rectangle {
            color: palette.window;
            anchors.right: parent.right;
            anchors.left: buttonAdd.right;
            anchors.top: parent.top;
            anchors.bottom: parent.bottom;
    
            CounterView {
                anchors.centerIn: parent;
            }
        }
    }
    

    That is the end; if you have any questions, please let me know !
    I hope you enjoyed it and learned something !

    References

    Flux by Facebook
    Quick Flux : Problems about MVC and introduction

    Thanks !

  • How to make a Photon Mapper : Photons everywhere

    Bleed Color and Caustics.

    Hello,

    It has been a long time since I posted anything on this blog; I am sorry about that.
    I will try to diversify my blog: I'll talk about C++, rendering (always <3), and Qt, a framework I love.

    So, in this last part, we will talk about photons.

    What exactly are Photons ?

    A photon is a quantum of light. It carries light in a straight line through the medium. Thanks to photons, we can see objects.

    How can we transform photons into a “visible value” like an RGB color ?

    We saw that the eye only “sees” radiance !
    So we have to transform our photons into radiance.

    From the physics of photons

    We know that one photon has energy :

    \displaystyle{}E_{\lambda}={h\nu}=\frac{hc}{\lambda}

    where \lambda is the wavelength in nm and E in Joules.
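    As a quick numerical sanity check of this formula, here is a standalone sketch using the standard values of the physical constants (the code works in meters rather than nm):

    ```cpp
    #include <cassert>
    #include <cmath>

    int main() {
        const double h = 6.62607015e-34; // Planck constant (J*s)
        const double c = 2.99792458e8;   // speed of light (m/s)

        // Energy of a single green photon, lambda = 550 nm: E = h*c / lambda
        double lambda = 550e-9;
        double E = h * c / lambda;

        // A visible photon carries a few 1e-19 joules.
        assert(std::abs(E - 3.61e-19) < 0.01e-19);
        return 0;
    }
    ```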
    Say we have n_\lambda photons of energy E_{\lambda} each.
    We can now define the luminous energy :

    \displaystyle{Q_{\lambda}=n_{\lambda}E_{\lambda}}

    The luminous flux is just the time derivative of the luminous energy :

    \displaystyle{\phi_{\lambda}=\frac{dQ_{\lambda}}{dt}}

    The idea is great, but this luminous flux is a function of the wavelength, while the radiance expects a total luminous flux.
    So we want a total flux, which is the integral of the spectral luminous flux over all wavelengths in the visible spectrum.

    \displaystyle{\phi=\int_{380}^{750}d\phi_{\lambda}=\int_{380}^{750}\frac{\partial \phi_{\lambda}}{\partial\lambda}d\lambda}
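    Numerically, this integral is just a sum over wavelength bins. Here is a sketch with a hypothetical flat spectral density, chosen so the result is known analytically (the trapezoidal rule is exact for a constant):

    ```cpp
    #include <cassert>
    #include <cmath>

    // Trapezoidal integration of a spectral flux density over [lo, hi] nm.
    double integrateFlux(double (*density)(double), double lo, double hi, int n) {
        double h = (hi - lo) / n;
        double sum = 0.5 * (density(lo) + density(hi));
        for (int i = 1; i < n; ++i) sum += density(lo + i * h);
        return sum * h;
    }

    // Hypothetical flat spectrum: 0.01 W per nm of wavelength.
    double flat(double /*lambda*/) { return 0.01; }

    int main() {
        double phi = integrateFlux(flat, 380.0, 750.0, 1000);
        // Analytically: 0.01 * (750 - 380) = 3.7 W
        assert(std::abs(phi - 3.7) < 1e-9);
        return 0;
    }
    ```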

    Now, we have the radiance

    \displaystyle{L=\frac{d^2 \phi}{cos \theta dAd\omega}=\frac{d^2(\int_{380}^{750}\frac{\partial \phi_{\lambda}}{\partial \lambda}d\lambda)}{cos(\theta)dAd\omega}=\int_{380}^{750}\frac{d^3\phi_{\lambda}}{cos(\theta)dAd\omega d\lambda}d\lambda}

    Using the rendering equation, we get two forms :

    \displaystyle{L^O=\int_{380}^{750}\int_{\Omega^+}fr(\mathbf{x}, \omega_i,\omega_o,\lambda)\frac{d^3\phi_{\lambda}}{dAd\lambda}d\lambda}
    \displaystyle{L^O=\int_{\Omega^+}fr(\mathbf{x}, \omega_i,\omega_o)\frac{d^2\phi}{dA}}

    The first one takes dispersion into account while the second one doesn't.
    In this post, I am not going to use the first one, but I may write an article about it later.

    Let’s make our Photon Mapper

    What do we need ?

    We need a Light which emits photons, so we can add an “emitPhotons” function.

    /**
     * @brief      Interface for a light
     */
    class AbstractLight {
    public:
        AbstractLight(glm::vec3 const &flux);
    
        virtual glm::vec3 getIrradiance(glm::vec3 const &position, glm::vec3 const &normal) = 0;
    
        virtual void emitPhotons(std::size_t number) = 0;
    
        virtual ~AbstractLight() = default;
    
    protected:
        glm::vec3 mTotalFlux;
    };

    We also need a material which bounces photons :

    class AbstractMaterial {
    public:
        AbstractMaterial(float albedo);
        
        virtual glm::vec3 getReflectedRadiance(Ray const &ray, AbstractShape const &shape) = 0;
        
        virtual void bouncePhoton(Photon const &photon, AbstractShape const &shape) = 0;
        virtual ~AbstractMaterial() = default;
        
        float albedo;
    protected:
        virtual float brdf(glm::vec3 const &ingoing, glm::vec3 const &outgoing, glm::vec3 const &normal) = 0;
    };

    Obviously, we also need a structure for our photons. This structure should be able to store photons and compute irradiance at a given position.

    class AbstractPhotonMap {
    public:
        AbstractPhotonMap() = default;
    
        virtual glm::vec3 gatherIrradiance(glm::vec3 position, glm::vec3 normal, float radius) = 0;
        virtual void addPhoton(Photon const &photon) = 0;
        virtual void clear() = 0;
    
        virtual ~AbstractPhotonMap() = default;
    private:
    };

    How could we do this ?

    Emitting photons is really easy :

    void SpotLight::emitPhotons(std::size_t number) {
        Photon photon;
    
        photon.flux = mTotalFlux / (float)number;
        photon.position = mPosition;
    
        for(auto i(0u); i < number; ++i) {
            vec3 directionPhoton;
            do
                directionPhoton = Random::random.getSphereDirection();
            while(dot(directionPhoton, mDirection) < mCosCutoff);
    
            photon.direction = directionPhoton;
            tracePhoton(photon);
        }
    }

    We divide the total flux by the number of photons and compute a random direction; then we can trace the photon.
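    The rejection loop can be checked in isolation. This sketch uses its own uniform-sphere sampler (a Gaussian-based one, not the engine's Random class) and verifies that every accepted direction really lies inside the spotlight cone:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <random>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Uniform direction on the unit sphere: normalize a 3D Gaussian sample.
    Vec3 sphereDirection(std::mt19937 &rng) {
        std::normal_distribution<double> n(0.0, 1.0);
        Vec3 v{n(rng), n(rng), n(rng)};
        double len = std::sqrt(dot(v, v));
        return {v.x / len, v.y / len, v.z / len};
    }

    int main() {
        std::mt19937 rng(42);
        Vec3 spotDir{0.0, 0.0, 1.0};
        double cosCutoff = std::cos(0.5); // ~28.6 degree half-angle

        // Rejection sampling, like in SpotLight::emitPhotons.
        for (int i = 0; i < 1000; ++i) {
            Vec3 d;
            do d = sphereDirection(rng); while (dot(d, spotDir) < cosCutoff);
            assert(dot(d, spotDir) >= cosCutoff);     // inside the cone
            assert(std::abs(dot(d, d) - 1.0) < 1e-9); // unit length
        }
        return 0;
    }
    ```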

    Bouncing a photon depends on your material :

    void UniformLambertianMaterial::bouncePhoton(const Photon &_photon, const AbstractShape &shape) {
        Photon photon = _photon;
    
        float xi = Random::random.xi();
        float d = brdf(vec3(), vec3(), vec3());
    
        if(photon.recursionDeep > 0) {
            // Photon is absorbed
            if(xi > d) {
                World::world.addPhoton(_photon);
                return;
            }
        }
    
        if(++photon.recursionDeep > MAX_BOUNCES)
            return;
    
        photon.flux *= color;
        photon.direction = Random::random.getHemisphereDirection(shape.getNormal(photon.position));
        tracePhoton(photon);
    }

    To take care of conservation of energy, we play Russian roulette.
    Obviously, this also means we have to modify the direct lighting accordingly ^^.
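    Russian roulette stays unbiased because, in its general formulation, a path that survives with probability p is reweighted by 1/p, so the expected contribution is unchanged. A quick standalone check with made-up numbers (not the engine's BRDF):

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <random>

    int main() {
        std::mt19937 rng(1234);
        std::uniform_real_distribution<double> xi(0.0, 1.0);

        const double p = 0.6;     // survival probability (e.g. the albedo)
        const double value = 2.0; // contribution carried by a surviving path

        // Roulette estimator: keep the path with probability p, reweight by 1/p.
        double sum = 0.0;
        const int N = 200000;
        for (int i = 0; i < N; ++i)
            if (xi(rng) < p) sum += value / p;

        double mean = sum / N;
        // The expectation is exactly `value`, whatever p is.
        assert(std::abs(mean - value) < 0.02);
        return 0;
    }
    ```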

    vec3 UniformLambertianMaterial::getReflectedRadiance(Ray const &ray, AbstractShape const &shape) {
        vec3 directLighting = getIrradianceFromDirectLighting(ray.origin, shape.getNormal(ray.origin));
        float f = brdf(vec3(), vec3(), vec3());
    
        return color * (1.f - f) * f * (directLighting + World::world.gatherIrradiance(ray.origin, shape.getNormal(ray.origin), 0.5f));
    }

    Finally, we need to compute the irradiance at a given position. It is simply :

    \displaystyle{E=\sum \frac {\phi}{\pi r^2}}

    So we could easily write :

    vec3 SimplePhotonMap::gatherIrradiance(glm::vec3 position, glm::vec3 normal, float radius) {
        float radiusSquare = radius * radius;
        vec3 irradiance;
        for(auto &photon : mPhotons)
            if(dot(photon.position - position, photon.position - position) < radiusSquare)
                if(dot(photon.direction, normal) < 0.0)
                    irradiance += photon.flux;
    
        return irradiance / ((float)M_PI * radiusSquare);
    }

    To get shadows, you can emit shadow photons like this :

    void traceShadowPhoton(const Photon &_photon) {
        Photon photon = _photon;
        Ray ray(photon.position + photon.direction * RAY_EPSILON, photon.direction);
    
        photon.flux = -photon.flux;
    
        auto nearest = World::world.findNearest(ray);
    
        while(get<0>(nearest) != nullptr) {
            ray.origin += ray.direction * get<1>(nearest);
            photon.position = ray.origin;
    
            if(dot(ray.direction, get<0>(nearest)->getNormal(ray.origin)) < 0.f)
                World::world.addPhoton(photon);
    
            ray.origin += RAY_EPSILON * ray.direction;
    
            nearest = World::world.findNearest(ray);
        }
    }

    That's all ! If you have any questions, please let me know !