• Introduction

Metaballs! Yeah, like meatballs, but no meat.

Before jumping into that: I remember, years ago, a colleague at the office giving a small talk about Fragment Shaders.
At the time, I thought I'd be better off deciphering the Necronomicon (including searching for it in the local library).

Today, my preferred (and almost absurd) simplification to explain the concept is that it's just a function that returns 4 values, which turn out to be the color of the pixel to be drawn (Red, Green, Blue, and Alpha).
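
If that sounds too abstract, here is that simplification written down as a tiny, purely illustrative C# sketch (a real fragment shader is written in a shading language like HLSL, but the shape of the idea is the same):

using UnityEngine;

public static class FragmentShaderIdea
{
    // The whole simplification: given a pixel (here, its UV position),
    // return 4 values: Red, Green, Blue, and Alpha.
    public static Color Frag(Vector2 uv)
    {
        return new Color(1f, 0f, 0f, 1f); // paint every pixel solid red
    }
}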

Of course, this is a partial simplification of something more complex than that, but why scare innocent people too early?
All this fits within the world of ‘how to draw beautiful things on the screen’.
And this post is precisely about that.

Well, maybe not so beautiful things, but some basic attempts.

• Metaballs: What are they?

An AI-generated lava lamp effect

This brings us back to Metaballs: What are they? The simplest and most direct way is: ‘do you remember those lava lamps?’

Those viscous balls that joined and then separated in an almost lifelike way?

Well, Metaballs are about that effect, and this article, like many others, is about how to achieve it.

• How to ‘achieve’ the effect

Simplifying, I've seen two approaches out there: the mathematical way, and the slightly trickier way.

Math Lady meme

Math what?

The first approach, the more elegant one, follows a technique called "Marching Cubes".
It was published in 1987 by William E. Lorensen and Harvey E. Cline in a paper called "Marching Cubes: A High Resolution 3D Surface Construction Algorithm".

In a simplified view, it looks at where the circles (or spheres) fall relative to the vertices of a grid of cubes in space, and uses that information to draw.
Draw what? Well, whatever needs to be drawn.

Marching Squares (2D version)

I know it sounds strange, but let's say on one side we have a grid with sensors at each vertex, and on the other side our balls.
(Plus, a ball can sit over multiple sensors.)

Whenever a ball is over a sensor, that sensor becomes active. Then, taking a set of sensors (the four corners of one square), we draw lines according to which sensors in that set are active. And so on until we have covered the entire grid.

Easier said than done?
This is an excellent resource to get a grasp of it
(this is the 2D version, squares instead of cubes).
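
To make the sensor analogy a bit more concrete, here is a minimal, purely illustrative C# sketch of the two ingredients (the names and the falloff function are my assumptions, not part of this project):

using UnityEngine;

public static class MarchingSquaresSketch
{
    // A 'sensor' is active when the field value at that grid vertex
    // (for metaballs: the summed influence of all balls) passes a threshold.
    public static bool IsActive(float fieldValue, float threshold)
    {
        return fieldValue >= threshold;
    }

    // Classic metaball influence of one ball at a point: r^2 / d^2,
    // so it grows as we get closer to the ball's center.
    public static float Influence(Vector2 point, Vector2 ballCenter, float radius)
    {
        float d2 = (point - ballCenter).sqrMagnitude;
        return d2 > 0f ? (radius * radius) / d2 : float.MaxValue;
    }

    // Each square has 4 sensors (its corners). Packing their on/off states
    // into 4 bits gives one of 16 cases, which indexes a lookup table of
    // the line segments to draw inside that square.
    public static int CaseIndex(bool bottomLeft, bool bottomRight, bool topRight, bool topLeft)
    {
        int index = 0;
        if (bottomLeft) index |= 1;
        if (bottomRight) index |= 2;
        if (topRight) index |= 4;
        if (topLeft) index |= 8;
        return index; // 0 = fully outside, 15 = fully inside, anything else crosses the contour
    }
}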

In the second approach, slightly more ingenious tricks are used to calculate or compose the effect of ‘closeness’ or ‘influence’, and according to this, we draw!

Specifically, we can use elements that have an opaque color with a semi-transparent gradient border.
Now we can use the alpha values (which indicate how opaque or transparent the color is) to achieve the final effect.
So, in a post-draw step, we define some alpha thresholds to draw our Metaballs.
We can define more than one level, so we can have one color for the body and another for the border.

In the case of a single element, there’s nothing very interesting to see.
But when two bodies get close, their respective semi-transparent borders will join forces to ‘saturate’ the alpha channel until they reach the threshold that indicates that we have to draw something on the screen.
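
Here is a back-of-the-envelope C# sketch of that "joining forces" effect (illustrative only: it assumes a linear falloff and that contributions simply add up, which real alpha blending only approximates):

using UnityEngine;

public static class AlphaSaturationSketch
{
    // One ball's alpha contribution at 'point': 1 at the center,
    // fading linearly to 0 at its influence radius.
    public static float BallAlpha(Vector2 point, Vector2 center, float radius)
    {
        return Mathf.Clamp01(1f - Vector2.Distance(point, center) / radius);
    }

    public static void Example()
    {
        Vector2 p = new Vector2(0.5f, 0f);                 // a point halfway between two balls
        float a = BallAlpha(p, new Vector2(0f, 0f), 0.8f); // ~0.375 from ball A
        float b = BallAlpha(p, new Vector2(1f, 0f), 0.8f); // ~0.375 from ball B
        float total = Mathf.Clamp01(a + b);                // ~0.75 combined

        // Alone, neither ball reaches a 0.5 threshold at p; together they do,
        // so the 'bridge' between the two balls gets drawn.
        Debug.Log(total >= 0.5f); // true
    }
}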

In summary:

  • Marching Cubes: the mathematical way, from the 1987 paper 'Marching Cubes: A High Resolution 3D Surface Construction Algorithm'
  • The alpha trick: calculate or draw closeness/influence, then paint the body and contours according to it

• Some implementations / Tutorials

In the first approach:

In the second category, I share with you these four implementations:

• So, why another one?

With Marching, there are some limitations: you can't just put different colors on the edges, only 'contours' (although with some work I think it could be done; for instance, having a second system draw a second pass with a tighter 'contour' level?).

With the others, I would like to avoid the blur, or having to pass the location of each metaball on every frame.

And that’s the reason for this post; here I’m not coming to sell but to give away.

My idea is to remove the blur by using the alpha trick (which we will achieve with a shader), and then use a render pass that applies the coloring rules with another shader. This is similar to the last 2 posts, but this time we will use RenderGraph, which, according to the Unity people, is what's in vogue these days…

Why does the blur bother me?

The blur effect is calculated by copying, moving, and overlaying a semi-transparent version of the texture, giving the desired effect.
And this is done over multiple ‘passes’.
On low-end devices, this can negatively affect your frame rate (especially if you’re doing a blur on every frame!), so it’s sometimes good to look for alternatives.
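
For reference, a typical multi-pass blur looks something like this (a sketch in the classic Graphics.Blit style, purely to count the passes; blurMaterial and its two passes are assumptions for illustration):

using UnityEngine;

public static class BlurCostSketch
{
    // 'blurMaterial' is an assumption for this sketch:
    // pass 0 = horizontal taps, pass 1 = vertical taps.
    public static void Blur(RenderTexture src, RenderTexture dst, Material blurMaterial, int iterations)
    {
        RenderTexture tmp = RenderTexture.GetTemporary(src.width, src.height);
        Graphics.Blit(src, tmp);
        for (int i = 0; i < iterations; i++)
        {
            // Every iteration adds two more full-screen draws; doing this
            // each frame on a low-end GPU eats straight into the frame budget.
            Graphics.Blit(tmp, dst, blurMaterial, 0); // horizontal pass
            Graphics.Blit(dst, tmp, blurMaterial, 1); // vertical pass
        }
        Graphics.Blit(tmp, dst);
        RenderTexture.ReleaseTemporary(tmp);
    }
}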

• So, what do we need?

  • Unity 6
  • A Universal Render Pipeline (URP) project (since we're on Unity 6, the URP version is 17.0.3)

• First step: Shader for Circles

Instead of blur, we are going to use a shader to draw our Metaballs with an opaque body and a gradual semi-transparent border (more opaque toward the center, more transparent toward the 'outside').

The idea is to define which part of our circle will be solid and which part will be our ‘area of influence’, that is, when two balls are close, their areas of influence will be added to make the linking effect appear.

For this, we have the following code. Pretty simple, but it works.

Shader "Custom/GradientCircle"
{
    Properties
    {
        _Color ("Color", Color) = (1.0, 1.0, 1.0, 1)
        _Radius ("Radius", Range(0, 1)) = 1.0
        _Smoothness ("Smoothness", Range(0, 1)) = 0.8
    }
    
    SubShader
    {
        Tags {
            "RenderType"="Transparent" 
            "RenderPipeline"="UniversalPipeline" 
            "Queue"="Transparent"
        }
        LOD 100
        Blend SrcAlpha OneMinusSrcAlpha
        ZWrite Off
        
        Pass
        {
            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            struct Attributes
            {
                float4 positionOS : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct Varyings
            {
                float2 uv : TEXCOORD0;
                float4 positionCS : SV_POSITION;
            };


            CBUFFER_START(UnityPerMaterial)
                float4 _Color;
                float _Radius;
                float _Smoothness;
            CBUFFER_END

            Varyings vert(Attributes IN)
            {
                Varyings OUT;
                OUT.positionCS = TransformObjectToHClip(IN.positionOS.xyz);
                OUT.uv = IN.uv;
                return OUT;
            }

            half4 frag(Varyings IN) : SV_Target
            {
                // Normalize UV coordinates to range from -1 to 1
                const float2 uv = IN.uv * 2.0 - 1.0;

                // Circle center (in normalized coordinates)
                const float2 center = float2(0.0, 0.0);

                // Distance from current point (uv) to circle center
                const float dist = length(uv - center);

                // Smooth gradient: 1.0 at center, 0.0 at edge
                // The smoothstep function produces a smooth transition between 0.0 and 1.0
                // based on the distance from the circle center and the defined radius and smoothness
                const float smooth_circle = smoothstep(_Radius, _Radius - _Smoothness, dist);

                // The final alpha value is the smooth gradient value
                half4 color = _Color;
                color.a *= smooth_circle;
                
                return color;
            }
            ENDHLSL
        }
    }
}

We create a new material with this shader and, voilà:
We have a nice radial gradient

Radius and Smoothness are parameters we can adjust; we also have Color, but it's not that useful.

• Second step: URP Custom Pass | Render Feature

Once we have all the spheres on the screen, we are ready to draw the effect.

To do this, we will create a second shader and a custom render pass. Well, strictly speaking, two passes.

This second shader will draw depending on the amount of alpha present; this way, we can paint both the body of the Metaballs and the border.

This time, we will use Shader Graph, and we are going to call it MetaEffect:

…click me!

With these settings (The important part: Material = Fullscreen):

And with the following properties:

The Border Color and Body Color parameters represent the respective colors of the border and the body.
The Min Alpha Threshold defines the minimum alpha value that will be considered as part of the border, while the Body Threshold specifies the minimum alpha value that will be classified as part of the body.
Any alpha value that falls between the Min Alpha Threshold and the Body Threshold will be regarded as part of the border.
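
In other words, the graph boils down to a per-pixel decision like this one (a C# sketch of the logic, just for readability; the real version lives in the MetaEffect Shader Graph):

using UnityEngine;

public static class MetaEffectSketch
{
    // Per-pixel decision equivalent to the MetaEffect graph:
    // body above Body Threshold, border between the two thresholds,
    // transparent below Min Alpha Threshold.
    public static Color Classify(float alpha, Color bodyColor, Color borderColor,
                                 float minAlphaThreshold, float bodyThreshold)
    {
        if (alpha >= bodyThreshold) return bodyColor;       // solid body
        if (alpha >= minAlphaThreshold) return borderColor; // border ring
        return Color.clear;                                 // background
    }
}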

Now, for our custom render pass!

In the first pass, we draw our semi-transparent balls on a temporary texture.

There are some caveats though.
On one hand, we don’t want to show our original balls, just the effect of it.
And we don’t want to apply this to everything that is on the screen, only to our balls.

To do this, in our render pass we will work only with a specific layer, so we will create a FilteringSettings to only select a specific layer during the process:

// Settings to filter which renderers should be drawn
private readonly FilteringSettings _filterSettings;
...

// (assigned in the pass constructor, since the field is readonly)
_filterSettings = new FilteringSettings(renderQueueRange, layerMask);
...

private void InitRendererLists(ContextContainer frameData, ref LayerRenderPassData layerRenderPassData, RenderGraph renderGraph)
{
    // Access the relevant frame data from the Universal Render Pipeline
    var universalRenderingData = frameData.Get<UniversalRenderingData>();
    var cameraData = frameData.Get<UniversalCameraData>();
    var lightData = frameData.Get<UniversalLightData>();
    var sortingCriteria = cameraData.defaultOpaqueSortFlags;
    
    // Create drawing settings based on the shader tags and frame data
    var drawSettings = RenderingUtils.CreateDrawingSettings(_shaderTagIdList, universalRenderingData, cameraData, lightData, sortingCriteria);
    
    // Create renderer list parameters,
    // here we are using _filterSettings, and that is where we specified the layerMask to use
    var param = new RendererListParams(universalRenderingData.cullResults, drawSettings, _filterSettings);
    
    // Finally create a RenderListHandle 
    layerRenderPassData.RendererListHandle = renderGraph.CreateRendererList(param);
}

Then our first pass would look like this:

private static void ExecuteLayerRenderPass(LayerRenderPassData data, RasterGraphContext context)
{
    // Draw all renderers in the list
    context.cmd.DrawRendererList(data.RendererListHandle);
}
...

// Set up the layer render pass
const string layerRenderPassName = "Mat2Layer: Layer Render 1/2";
using (var builder = renderGraph.AddRasterRenderPass<LayerRenderPassData>(layerRenderPassName, out var passData))
{
    InitRendererLists(frameData, ref passData, renderGraph);
    builder.UseRendererList(passData.RendererListHandle);
    
    // Set up texture dependencies
    // We are not really using 'srcCamColor' on this pass,
    // but we are going to keep the next line for clarity and documentation... 
    builder.UseTexture(srcCamColor); 
    builder.SetRenderAttachment(temporaryHandle, 0);
    builder.SetRenderAttachmentDepth(srcCamDepth);
    
    builder.SetRenderFunc((LayerRenderPassData data, RasterGraphContext context) => ExecuteLayerRenderPass(data, context));
}

It's worth noting here that we follow the structure of:
a. setting a source (with builder.UseTexture(srcCamColor)),
b. setting the destination (with SetRenderAttachment(temporaryHandle, 0) and SetRenderAttachmentDepth(srcCamDepth)),
c. doing some work (with builder.SetRenderFunc).

But, as you can see in the comments, we are not really using srcCamColor (i.e., 'what the camera has rendered so far'); instead, we instruct our pass to use the FilteringSettings baked into the RendererListHandle, in this line: builder.UseRendererList(passData.RendererListHandle);

In the second pass, we apply the second shader to what was drawn in the previous pass (stored in our temporary texture handle called temporaryHandle) with BlitTexture.
And we draw the result on the screen.

Something like this:

private static void ExecuteBlitPass(BlitPassData data, RasterGraphContext context)
{
    // Blit the source texture to the current render target using the specified material
    Blitter.BlitTexture(context.cmd, data.Source, ScaleBias, data.Material, 0);
}
...

// Set up the blit pass
const string blitPassName = "Mat2Layer: Blit Pass 2/2";
using (var builder = renderGraph.AddRasterRenderPass<BlitPassData>(blitPassName, out var passData))
{
    // Configure pass data
    passData.Material = _material;
    // Use the output of the previous pass as the input
    passData.Source = temporaryHandle;
    builder.UseTexture(passData.Source);
    
    // Set the render target to the original color buffer
    builder.SetRenderAttachment(srcCamColor, 0);
    builder.SetRenderAttachmentDepth(srcCamDepth);
    
    builder.SetRenderFunc((BlitPassData data, RasterGraphContext context) => ExecuteBlitPass(data, context));
}

All the details are in MaterialToLayerRenderPass.cs; take a look at it, there are a lot of comments!

What is left is to create a new Layer and assign our balls to it.
Then, we will deselect that Layer from the regular rendering process (since we are going to take care of the drawing ourselves).

• Unity Scene

In our scene, we will start with a plane to which we will apply the material that we made from our GradientCircle shader.

Then, we will place these elements on a new Layer called Metaballs.

On the other hand, we have our UniversalRendererData (we can start with PC_Renderer, but remember that if we're going to use this on mobile, we'll also have to modify MobileRenderer). Here we first deselect the Metaballs layer; with this, the camera will not draw our balls on the screen!

After that, we add a new render feature: ours!

So we start looking for it:

And we add it, and configure it!

We configure it with:

  • Render Queue = All
  • LayerMask = Metaballs
  • Material = MetaEffect
  • RenderPassEvent = After Rendering Skybox

• Results

So, with one Metaball we should see this:

But with more of them:

And if we go to the Frame Debugger, we can see what is happening behind the curtain:

First pass

Second pass

And since we are here, let's take a look at the Render Graph Viewer:

Mat2Layer passes 1 and 2…

• Conclusion: Blobs and Beyond

And there you have it. We've ventured from abstract equations to smooth, organic shapes dancing across your screen. Metaballs may seem like simple blobs at first, but behind them lies a world of blending functions, threshold values, and a bit of computational magic. The next time you see those mesmerizing, fluid visuals in games or animations, remember that it's more than just pixels: it's a mix of math and code working together. I hope you enjoyed this tutorial and, as usual, comments, doubts, questions, suggestions, etc. are all welcome!

• Source Code

Here it is: URP Metaballs :) Until next time!