Rendering Nodes

July 24, 2021

Until now, rendering was implemented with render() functions for the different types.
For example, to render an Item as part of a Structure's content:

    fn render_item(
        &self,
        d: f32, // depth
        m: &[f32; 16], // model matrix
        v: &[f32; 16], // view matrix
        p: &[f32; 16], // projection matrix
        item: &Item,
    ) {
        match item {
            Item::Material(mat) => match mat {
                Material::Ore(o) => match o {
                    Ore::Copper => self.shapes.coppero.render(d, m, v, p),
                    Ore::Iron => self.shapes.irono.render(d, m, v, p),
                },
                Material::Plate(pl) => match pl {
                    Plate::Copper => self.shapes.copperp.render(d, m, v, p),
                    Plate::Iron => self.shapes.ironp.render(d, m, v, p),
                },
                // ... other material variants
            },
            // ... other item variants
        }
    }

The direct rendering was pretty simple to implement, but it has several drawbacks:

  • Transformation matrices have to be passed everywhere
  • Working transparency is basically impossible, since correct blending depends on the order of render calls
  • It’s hard to optimize or batch render calls to avoid context switches

To resolve this I introduced render Nodes, which contain only the data required to describe a render operation:

    pub enum Node {
        Text(TextNode),
        Color(ColorNode),
        Tex(Texture),
        Branch(BranchNode),
    }

    pub struct BranchNode {
        pub depth: Option<f32>,
        pub transformation: Option<Matrix4>,
        pub children: Vec<Node>,
    }

Every Node type implements its own render() method.
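As a sketch of how that dispatch could look (the parameter list simply mirrors the old render() functions and is an assumption, as is a render() method on Texture):

    impl Node {
        pub fn render(&self, d: f32, m: &Matrix4, v: &Matrix4, p: &Matrix4) {
            // Each variant delegates to its own render() implementation.
            match self {
                Node::Text(t) => t.render(d, m, v, p),
                Node::Color(c) => c.render(d, m, v, p),
                Node::Tex(t) => t.render(d, m, v, p),
                Node::Branch(b) => b.render(d, m, v, p),
            }
        }
    }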
Other code could now be refactored to return a Node instead of rendering directly.
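For illustration, a function that previously drew a Structure's content directly could instead just describe it; structure_node and content_nodes are hypothetical names, not the actual code:

    // Hypothetical sketch: build a description instead of issuing draw
    // calls, leaving it to the caller to decide when to render the tree.
    fn structure_node(&self, transformation: Matrix4) -> Node {
        Node::Branch(BranchNode {
            depth: None,
            transformation: Some(transformation),
            children: self.content_nodes(), // e.g. one Node per contained Item
        })
    }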
Parent <-> child relationships are modeled with the BranchNode, where all transformations of children are interpreted as relative to the parent.
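A minimal sketch of what BranchNode's render() could look like under that rule, assuming Matrix4 is Copy and implements multiplication, and that a None field means "inherit from the parent":

    impl BranchNode {
        pub fn render(&self, d: f32, m: &Matrix4, v: &Matrix4, p: &Matrix4) {
            // A branch may override the inherited depth.
            let d = self.depth.unwrap_or(d);
            // Compose this branch's transformation onto the parent's, so
            // children only ever specify transforms relative to their parent.
            let m = match self.transformation {
                Some(t) => *m * t,
                None => *m,
            };
            for child in &self.children {
                child.render(d, &m, v, p);
            }
        }
    }

With the whole tree described as data before anything is drawn, the renderer is free to sort transparent nodes by depth and to batch calls that share state, which addresses the drawbacks listed above.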