

Graphics Projects
Synopsis
This is a separate list of all my individual graphics work. I treat these projects primarily as learning opportunities and, most importantly, as fun.
I used DirectX 11 and 12, DirectX Raytracing, OpenGL, and even the PlayStation 5 SDK in these projects. Enjoy the eye candy!
My Roles
Graphics Programming
Technical Art
Team Size
1
Tools
D3D11, D3D12, DirectX Raytracing, HLSL, GLSL, C++, OpenGL, PS5 SDK, RenderDoc, RazorGPU
Portal Renderer (DX11)
Development Time: Apr - May 2023
This real-time renderer, built with DX11, was initially made for a class and features shadows, physically-based rendering, and procedural terrain generation via Perlin Noise.
However, I took it beyond the class because I was extremely interested in how to render portals. I implemented portals that can be seen through and moved around in real time. These portals can also render each other recursively, so portals can be seen through other portals.
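For the curious, here's a rough sketch of the recursive rendering order in C++. This is a hedged, simplified sketch rather than my exact DX11 code: RenderScene and RenderPortalQuad are placeholder engine calls, and a real implementation would also flip the virtual camera 180 degrees around the exit portal's up axis.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

constexpr int kMaxPortalDepth = 4; // recursion cap so portals-in-portals terminate

struct Camera { XMMATRIX world; }; // camera-to-world transform
struct Portal {
    XMMATRIX world;       // entry portal's world transform
    XMMATRIX linkedWorld; // exit (linked) portal's world transform
};

// The "virtual" camera: express the player camera relative to the entry
// portal, then re-attach that relative transform to the exit portal.
XMMATRIX VirtualCameraWorld(const XMMATRIX& camWorld, const Portal& p) {
    XMMATRIX entryInverse = XMMatrixInverse(nullptr, p.world);
    return camWorld * entryInverse * p.linkedWorld;
}

void RenderScene(const Camera& cam);    // placeholder: draw the world from cam
void RenderPortalQuad(const Portal& p); // placeholder: draw the portal surface,
                                        // textured with the last recursion's view

void RenderWithPortal(const Camera& cam, const Portal& p, int depth = 0) {
    if (depth < kMaxPortalDepth) {
        // Render the deepest view first so each level's portal quad can
        // display the view rendered one level below it.
        Camera virtualCam{ VirtualCameraWorld(cam.world, p) };
        RenderWithPortal(virtualCam, p, depth + 1);
    }
    RenderScene(cam);
    RenderPortalQuad(p);
}
```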
You can view the code and a ReadMe file with more in-depth documentation on my approach below!
(Side note: My other project, Duolatera, is the successor to this project, where I built fully interactable portals for VR in Unreal 5. Check that out if you haven't already!)



Volumetric Light Rays (DX11)
Development Time: Apr - May 2025
Volumetric lighting is a huge part of what makes games so beautiful, in my opinion. I wanted to try my hand at it, so I made movable and rotatable directional and point lights that emit visible light rays in a basic DX11 renderer.
This is an entirely post-process effect. I start by creating a skybox texture: I render each light source's location in my scene onto a cube map, giving that pixel the light's color and the surrounding pixels a bit of falloff. Because this is treated as a skybox, parts of it will be occluded, or blocked, by other objects.
Next, I use this cube map in my Light Ray post-process shader. I implemented a Ray Marching algorithm that steps a 2D ray starting at the pixel being rendered, directed toward the light's position in Screen Space. For each pixel the ray passes through, I sample the cube map's color and add a percentage of it to my result.
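To make that concrete, here's a hedged, CPU-style sketch of the march. The real version is an HLSL post-process shader, and SampleOccludedLight is a placeholder for reading the occluded light skybox described above.

```cpp
struct Float2 { float x, y; };
struct Color  { float r, g, b; };

// Placeholder: sample the rendered (and partially occluded) light skybox
// at a screen-space UV.
Color SampleOccludedLight(Float2 uv);

// March a 2D ray from the current pixel toward the light's screen-space
// position, adding a fraction of the light texture's color at each step.
Color LightRays(Float2 pixelUV, Float2 lightUV, int numSteps, float intensity) {
    Float2 step = { (lightUV.x - pixelUV.x) / numSteps,
                    (lightUV.y - pixelUV.y) / numSteps };
    Float2 uv = pixelUV;
    Color result = { 0.0f, 0.0f, 0.0f };
    const float weight = intensity / numSteps; // the "percentage" per step
    for (int i = 0; i < numSteps; ++i) {
        Color sampleColor = SampleOccludedLight(uv);
        result.r += sampleColor.r * weight;
        result.g += sampleColor.g * weight;
        result.b += sampleColor.b * weight;
        uv.x += step.x;
        uv.y += step.y;
    }
    return result; // added on top of the scene's base color
}
```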
This came out beautifully, though a future goal is to figure out how to render multiple volumetric lights at the same time. My current approach makes the scene far too bright because all lights are rendered to the same cube map: as a ray marches toward one light, it also accumulates color from every other light it passes, when it should only accumulate color from the light it is marching toward.
I could potentially have a unique cube map per light, but imagine there being 18 light sources... I'd run out of memory pretty fast. I'll have to think about it more. Regardless, feel free to check out the code below!
Real-time Pathtracer (DX12 & DXR)
Development Time: Mar 2025
This path tracer was built with D3D12 and the DX Raytracing API. In this program, the user can move and look around with WASD and the mouse, as well as customize various path tracing parameters such as rays per pixel and max bounces.
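To show what those parameters actually control, here's a hedged, schematic sketch of the per-pixel loop. The real work happens in DXR ray generation and hit shaders; Ray, Hit, GenerateCameraRay, TraceScene, and ScatterRay are all placeholders rather than actual DXR API calls.

```cpp
struct Color {
    float r, g, b;
    Color operator*(const Color& o) const { return { r * o.r, g * o.g, b * o.b }; }
    Color operator*(float k)        const { return { r * k, g * k, b * k }; }
    Color& operator+=(const Color& o) { r += o.r; g += o.g; b += o.b; return *this; }
};

struct Ray { /* origin, direction */ };
struct Hit { bool valid; Color emission; Color albedo; };

Ray GenerateCameraRay(int x, int y, int sample); // placeholder: jittered primary ray
Hit TraceScene(const Ray& ray);                  // placeholder: DXR does this traversal
Ray ScatterRay(const Ray& ray, const Hit& hit);  // placeholder: sample the BRDF

// raysPerPixel trades noise for speed; maxBounces caps how much indirect
// light each path can gather before it is cut off.
Color ShadePixel(int x, int y, int raysPerPixel, int maxBounces) {
    Color sum = { 0, 0, 0 };
    for (int s = 0; s < raysPerPixel; ++s) {
        Color throughput = { 1, 1, 1 }; // light surviving along the path so far
        Ray ray = GenerateCameraRay(x, y, s);
        for (int bounce = 0; bounce < maxBounces; ++bounce) {
            Hit hit = TraceScene(ray);
            if (!hit.valid) break;                // ray escaped the scene
            sum += throughput * hit.emission;     // pick up emitted light
            throughput = throughput * hit.albedo; // attenuate by the surface
            ray = ScatterRay(ray, hit);           // continue the path
        }
    }
    return sum * (1.0f / raysPerPixel); // average the samples
}
```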
Radiosity (OpenGL)
Development Time: Mar - May 2024
Baked lighting really brings environments with static lighting to life. I implemented baked lighting via Radiosity using the classic Cornell Box scene.
The bake process takes a few minutes, but once it's done, the user can look and move around the scene freely to examine the result in real time.
To achieve this, all surfaces in the scene are subdivided into small quads called patches. For each patch, I calculate the sum of how much light all other patches contribute to it and add that sum to its own light emission and direct light reflection. I repeat this for all patches several times, where each pass represents the light "bouncing" between surfaces once more.
The most common approach to calculating how much light bounces from patch A to patch B is to project patch A onto a hemicube surrounding patch B that is divided into a bunch of pixels. The result, called a "form factor," is the percentage of those pixels covered by the projection.
This may sound scary, but in practice I just need to render the scene into a cube map from the point of view of the patch I am calculating, then sample that cube map to determine how many pixels each rendered patch covers. I add all of those form-factor-weighted contributions to the color of the patch being calculated to get its new color. Do this for all patches enough times and you get baked diffuse lighting.
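Here's a hedged sketch of that gather pass. The names are hypothetical, and formFactor[i][j] stands for the hemicube coverage measured from the cube-map renders described above.

```cpp
#include <cstddef>
#include <vector>

struct Color {
    float r, g, b;
    Color operator*(float k) const { return { r * k, g * k, b * k }; }
    Color& operator+=(const Color& o) { r += o.r; g += o.g; b += o.b; return *this; }
};

struct Patch {
    Color emission;  // light the patch emits on its own
    Color direct;    // direct light reflection
    Color radiosity; // current estimate of the patch's total light
};

// One "bounce": every patch gathers light from every other patch, weighted
// by the form factor measured from its hemicube/cube-map render.
void GatherBounce(std::vector<Patch>& patches,
                  const std::vector<std::vector<float>>& formFactor) {
    std::vector<Color> next(patches.size());
    for (std::size_t i = 0; i < patches.size(); ++i) {
        Color sum = patches[i].emission;
        sum += patches[i].direct;
        for (std::size_t j = 0; j < patches.size(); ++j)
            if (j != i) sum += patches[j].radiosity * formFactor[i][j];
        next[i] = sum;
    }
    for (std::size_t i = 0; i < patches.size(); ++i)
        patches[i].radiosity = next[i];
}
// Running GatherBounce several times is the repeated "bouncing" step;
// the final radiosity values become the baked patch colors.
```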
Unfortunately, my result looks a bit blocky because I did not interpolate the color between patches, which would have effectively smoothed it out. It's a stretch goal I didn't have time for, but interpolation is how I would fix the issue.
Screen-Space Reflections (OpenGL)
Development Time: Apr - May 2024
Screen-Space Reflections (SSR) are one of my favorite post-process effects in games, so I implemented them in my OpenGL engine and demonstrated the effect using a scene imported with Assimp, a free, open-source asset importer library.
SSR is a post-process effect that computes surface reflections in Screen Space. To do this, I first needed my initial render pass to output a base color texture, a normal texture, and a depth texture, representing the initial color of my scene, its view-space normals, and its scene depth, respectively.
The next step was using these textures in my post-process pass. As with my volumetric lights project, I used Ray Marching starting at the pixel being rendered, with the ray's direction being that pixel's view-space position vector reflected over its normal, as sampled from the normal texture. I then test each pixel the ray passes through in Screen Space to see if the ray hit an object at that pixel.
A hit occurs when the depth at the pixel being tested, as calculated from the ray's start and direction, roughly matches the depth sampled from the depth texture. When it does, the tested pixel's color, sampled from the base color texture, gets added to the starting pixel's color.
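Here's a hedged, CPU-style sketch of that depth test. The real version is a GLSL post-process shader; SampleDepth, SampleColor, and ProjectToScreen are placeholders for the texture reads and projection math.

```cpp
struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator*(float k)       const { return { x * k, y * k, z * k }; }
};

float SampleDepth(float u, float v); // placeholder: read the depth texture
Vec3  SampleColor(float u, float v); // placeholder: read the base color texture
// Placeholder: project a view-space point to screen UV plus its depth.
void  ProjectToScreen(const Vec3& p, float& u, float& v, float& rayDepth);

// March along the reflected direction; report a hit where the ray's depth
// roughly matches the depth buffer (within `thickness`).
bool MarchReflection(Vec3 viewPos, Vec3 reflectDir, int numSteps,
                     float stepSize, float thickness, Vec3& hitColor) {
    for (int i = 1; i <= numSteps; ++i) {
        Vec3 p = viewPos + reflectDir * (stepSize * i);
        float u, v, rayDepth;
        ProjectToScreen(p, u, v, rayDepth);
        if (u < 0 || u > 1 || v < 0 || v > 1) return false; // left the screen
        float sceneDepth = SampleDepth(u, v);
        if (rayDepth >= sceneDepth && rayDepth - sceneDepth < thickness) {
            hitColor = SampleColor(u, v); // add this to the starting pixel
            return true;
        }
    }
    return false; // no depth match: no reflection for this pixel
}
```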
Throughout the project I ran into issues with my screen rendering either completely black or just wrong in weird ways. I used RenderDoc to capture frames and examine what my buffers looked like under the hood and what my shaders were actually outputting. Interestingly enough, my issues were frequently caused by incorrect buffer data formats. Either way, the problems were solved without much headache.
The result is a very convincing reflection of the trees on the pond in my scene. Because this is a screen-space effect, reflections disappear when the Ray Marching goes off-screen, but it's still a great solution that is certainly more performant than real ray tracing.
Pond Scene - No SSR
Pond Scene - With SSR
3D Terrain Manipulator (PS5 SDK)
Development Time: Nov 2024
This program was written with the PS5 SDK for a console development class. You can fly and look around, and you can aim at a flat plane to dig into it or extrude it outwards, allowing you to build things like caves and overhangs.
To do this, I implemented a version of the classic Marching Cubes algorithm to generate the 3D mesh representing my terrain. Essentially, you start with a 3D grid of 4D vectors, where the X, Y, and Z components are a grid point's location in 3D space and the W component is what's called a threshold value. Then, you "march" along each cube of 8 vertices in this grid to determine where to generate triangles. For each vertex in a cube, if its threshold value is below a certain Surface Level (which I set to 0.5), the vertex is inside the terrain; otherwise, it is outside.
Triangles are then placed to separate the points outside the terrain from those inside, and these triangles become the physical surface of the mesh generated from the grid.
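Here's a hedged sketch of the classification step for a single cube. The full algorithm takes the resulting 8-bit index into the standard Marching Cubes edge and triangle lookup tables (256 entries, omitted here) to decide which triangles to emit.

```cpp
#include <cstdint>

constexpr float kSurfaceLevel = 0.5f;

// `density` holds the W (threshold) values of the cube's 8 corners.
std::uint8_t CubeIndex(const float density[8]) {
    std::uint8_t index = 0;
    for (int corner = 0; corner < 8; ++corner) {
        if (density[corner] < kSurfaceLevel) // below Surface Level = inside
            index |= static_cast<std::uint8_t>(1u << corner);
    }
    return index; // 0 (all outside) or 255 (all inside) emits no triangles
}

// Triangle vertices land on cube edges; interpolating by the two corner
// densities puts the surface right where the values cross kSurfaceLevel.
float EdgeLerpT(float d0, float d1) {
    return (kSurfaceLevel - d0) / (d1 - d0);
}
```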
The final stage of this process involved implementing a Raycasting function to determine where on the terrain my camera is aimed, then hooking input up to the controller triggers. Holding Right Trigger increases the threshold value of all grid points within a radius of the raycast impact point, and holding Left Trigger decreases them. I then regenerate the terrain with the new threshold values.
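Here's a hedged sketch of that edit step with hypothetical names; the raycast itself isn't shown, and RegenerateTerrain stands in for my actual mesh rebuild.

```cpp
#include <cmath>
#include <vector>

struct GridPoint { float x, y, z, w; }; // xyz = position, w = threshold value

// Placeholder: re-run Marching Cubes over the grid to rebuild the mesh.
void RegenerateTerrain(const std::vector<GridPoint>& grid);

// delta > 0 while Right Trigger is held, delta < 0 for Left Trigger.
void EditTerrain(std::vector<GridPoint>& grid,
                 float hitX, float hitY, float hitZ, // raycast impact point
                 float radius, float delta) {
    for (GridPoint& p : grid) {
        float dx = p.x - hitX, dy = p.y - hitY, dz = p.z - hitZ;
        if (std::sqrt(dx * dx + dy * dy + dz * dz) < radius)
            p.w += delta; // push this point further inside or outside the surface
    }
    RegenerateTerrain(grid); // rebuild the mesh with the new threshold values
}
```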
As I was relatively new to the PS5 SDK at the time, I naturally encountered errors in the rendering stage of my program. I used RazorGPU to debug issues with how I was sending my Marching Cubes terrain vertices to the GPU, as well as to analyze problems with my shaders (though there weren't many of those).