I was sitting with an empty scene open in Blender, thinking about my next project, when it struck me how vast the area of 3D modeling has become and how most people don’t even realize how much 3D content they see every day. There are many types of 3D modeling today, everything from box modeling to photogrammetry.
In this article I will list 10 different types of 3D modeling. Perhaps you can get an idea of where your next project will go, or maybe you will be as amazed as I am at just how much 3D is used as a tool to visualize in so many ways. These are the types of modeling we will discuss and explore:
You could argue that there are as many types of modeling as there are tools. However, I have chosen to limit myself to techniques that are recognized by a slightly wider audience than just myself. These are the techniques that I have found seem to be generally accepted as techniques or types of 3D modeling.
All of them are viable in one way or another. It simply depends on what kind of shape and detail you are aspiring to create. Most of these types can be used in Blender. But this is not a Blender exclusive article even if that is my tool of choice. Instead, I want to encourage a broader view and see what each type of modeling brings with it.
Here I will list some areas where we might want to use these types of modeling just to give you a broad overview.
We most likely find some types of modeling used much more frequently in some of these areas than others. Let’s dive into the types of modeling we can expect to encounter.
Let’s start with box modeling. What makes box modeling its own type is that we start with some primitive object, such as a cube or sphere, and we use classic modeling tools to create a shape from it.
Related content: More than 30 Blender modeling tools explained
We have a starting point, and we work with low poly shapes to create our object. This is a common way of modeling that is quite mechanical since we control individual faces, edges and vertices. With box modeling we have an emphasis on manipulating whole shapes and larger portions of an object at a time.
Most of the time we work with faces that have four sides, we call them quads. These are easy to work with since most modeling tools are designed to work with quads. But before we use a model it is often triangulated, either by the user beforehand or automatically by the software under the hood.
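The triangulation step mentioned above is conceptually simple. Here is a minimal sketch of the fan triangulation that many tools apply under the hood (real exporters may also pick the diagonal that gives better shading, which this toy version ignores):

```python
def triangulate(face):
    """Split an n-gon (a list of vertex indices) into a fan of triangles."""
    return [(face[0], face[i], face[i + 1]) for i in range(1, len(face) - 1)]

quad = (0, 1, 2, 3)
print(triangulate(quad))  # a quad becomes two triangles
```

A quad yields two triangles, a five-sided n-gon yields three, and so on, which is why triangle counts roughly double the quad counts you see while modeling.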
This type of modeling tends to work best with hard-surface subjects such as architectural visualization, man-made objects and products.
We use tools such as extruding, creating loop cuts and beveling. Box modeling is often used together with subdivision surface.
Subdivision surface is a technique that adds extra geometry in between the edges, vertices and faces that we manipulate with traditional modeling tools. The geometry that we control becomes like a cage that we use to shape the subdivided version of our object.
A subdivided low poly object becomes more rounded according to the Catmull-Clark algorithm. This may sound technical, but essentially we just add geometry that rounds the surface of our object.
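To make the rounding less abstract, here is a minimal sketch of one Catmull-Clark subdivision step applied to a cube. This is illustrative only; real implementations such as Blender's Subdivision Surface modifier handle arbitrary topology, creases and boundaries:

```python
def average(points):
    n = len(points)
    return tuple(sum(c) / n for c in zip(*points))

# Cube: 8 vertices, 6 quad faces (each a tuple of vertex indices).
verts = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
         (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]

# 1. Face points: the average of each face's corners.
face_points = [average([verts[i] for i in f]) for f in faces]

# 2. Edge points: average of the edge's two endpoints and the two
#    adjacent face points.
edge_faces = {}
for fi, f in enumerate(faces):
    for a, b in zip(f, f[1:] + f[:1]):
        edge_faces.setdefault(frozenset((a, b)), []).append(fi)

edge_points = {}
for edge, adj in edge_faces.items():
    a, b = tuple(edge)
    edge_points[edge] = average([verts[a], verts[b]] +
                                [face_points[fi] for fi in adj])

# 3. Move each original vertex to (F + 2R + (n-3)P) / n, where F is the
#    average of adjacent face points, R the average of adjacent edge
#    midpoints, P the old position and n the vertex valence.
new_verts = []
for vi, p in enumerate(verts):
    adj_faces = [face_points[fi] for fi, f in enumerate(faces) if vi in f]
    adj_edges = [e for e in edge_faces if vi in e]
    n = len(adj_edges)
    F = average(adj_faces)
    R = average([average([verts[a], verts[b]])
                 for e in adj_edges for a, b in [tuple(e)]])
    new_verts.append(tuple((F[c] + 2 * R[c] + (n - 3) * p[c]) / n
                           for c in range(3)))
```

After one step the cube's sharp corners are pulled inward toward the center, which is exactly the rounding we see when we add a subdivision surface modifier to a low poly cage.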
There are different schools on how to use subdivision surface. Since this is a kind of layer added on top of our original geometry, some people say that you should never model with the subdivision surface visible, because the original mesh may become unusable without subdivision surface applied, limiting how we can reuse it.
Others argue that it is much easier to see what you are doing and the intention is still to use the object with subdivision surface anyway.
Polygon modeling is a type of 3D modeling that is quite similar to box modeling. The difference here is that we usually start with a single vertex or a simple shape without any depth to it. Then we build our model piece by piece. We often use the same tools as with box modeling, but we use them in a more detail-oriented way.
The emphasis here is to work with edges and vertices a lot more. The type of objects we create with this technique still tend to be hard surface quite often but with more organic shapes.
Polygon modeling, like box modeling, often has an emphasis on using quads in the topology. This is because many tools are designed to work with a quad topology.
What we create with polygon modeling may fall into the hard-surface category. But many times the kind of models we create have some organic characteristic. It could be a statue or building ornaments, for example.
But it can also be some accessory, tool or other gear that we create with this technique.
Subdivision surface is often used here as well to smooth the object's geometry.
Essentially the tools used with box modeling and polygon modeling are the same, we just use them differently.
NURBS stands for non-uniform rational B-spline. No wonder we have an acronym. With NURBS we switch to a completely different kind of modeling. We create surfaces that we shape with control points, and we can use them to create very smooth curved surfaces.
We can both interpolate between points within the same curve and create bridges between multiple curves. We can set up a net of curves that act as the edges of an object and then fill in the geometry in between to create an object.
This kind of modeling is mostly used in engineering and CAD-like software, not so much in VFX and the art side of 3D.
Imagine you have an object that you want to 3D print. If you have a polygon model, created with box or polygon modeling, and you scale it up, all those faces and triangles will start to become visible, just like when you scale up a raster-based image.
On the other hand, with NURBS we can scale the model up and down and the curves will remain smooth. This could be said to be the equivalent of vector art in 2D graphics.
Since we no longer work with vertices, faces or edges and instead use curves, the tools are very different. We may have tools that open or close a curve, or create a new curve that interpolates between two other curves. But we also have tools that are very familiar, like moving control points, scaling and rotating.
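To show why NURBS stay smooth at any scale, here is a minimal rational B-spline curve evaluator using the Cox-de Boor recursion. The curve is computed on demand from control points and weights rather than stored as polygons. With the control points and weights below, a degree-2 NURBS reproduces a quarter of a unit circle exactly (a classic property of rational curves):

```python
from math import sqrt

def basis(i, p, t, knots):
    """Cox-de Boor B-spline basis function N_{i,p}(t)."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * basis(i, p - 1, t, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * basis(i + 1, p - 1, t, knots))
    return left + right

def nurbs_point(t, ctrl, weights, knots, degree):
    """Evaluate a rational B-spline (NURBS) curve at parameter t."""
    num, den = [0.0, 0.0], 0.0
    for i, (pt, w) in enumerate(zip(ctrl, weights)):
        b = basis(i, degree, t, knots) * w
        num[0] += b * pt[0]
        num[1] += b * pt[1]
        den += b
    return (num[0] / den, num[1] / den)

# Quarter of a unit circle: three control points, middle weight sqrt(2)/2.
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, sqrt(2) / 2, 1.0]
knots = [0, 0, 0, 1, 1, 1]

for t in (0.0, 0.25, 0.5, 0.75):
    x, y = nurbs_point(t, ctrl, weights, knots, 2)
    print(f"t={t}: ({x:.4f}, {y:.4f})")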
Sculpting takes us back from the engineering part of 3D modeling toward the generally more artistic side. Sculpting uses vertices, faces and edges, just like box and polygon modeling, but we use it to separate the shaping process from the technical details of worrying about individual elements. Instead of manipulating based on a selection we have brushes. A brush has an influence area and reshapes the geometry more organically, based on the brush type and settings.
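The idea of an influence area with falloff can be sketched in a few lines. The function names and the falloff curve below are illustrative, not Blender's actual code; every vertex inside the brush radius is pushed along a direction, scaled so the effect fades out toward the brush edge:

```python
from math import dist

def smooth_falloff(d, radius):
    """1 at the brush center, 0 at the edge, smooth in between."""
    t = min(d / radius, 1.0)
    return (1 - t) ** 2 * (1 + 2 * t)  # smoothstep-like curve

def apply_brush(verts, center, radius, direction, strength):
    """Displace every vertex within the brush radius, weighted by falloff."""
    result = []
    for v in verts:
        w = smooth_falloff(dist(v, center), radius) * strength
        result.append(tuple(c + w * d for c, d in zip(v, direction)))
    return result

# A flat 3x3 grid of vertices; pull the middle up along +Z.
grid = [(x, y, 0.0) for x in (-1, 0, 1) for y in (-1, 0, 1)]
sculpted = apply_brush(grid, center=(0, 0, 0), radius=1.5,
                       direction=(0, 0, 1), strength=0.5)
```

The center vertex moves the most and the outer vertices barely move, producing the soft organic bump we expect from a sculpt stroke.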
Sculpting is generally used for character, animal or creature design, but it can also be used to sculpt detail that would be hard to create with traditional box and polygon modeling.
There are different types of sculpting. We may sculpt on the mesh as it is, and this moves the vertices, edges and faces around according to the brush. Using this method we need to have a lot of geometry available from the start, or we will soon reach the limit of how much detail our geometry can hold.
The next technology is called multiresolution, or multires. It is similar to subdivision surface; the difference is that we can store the sculpt at each level of multires. Once we reach the limit of how much detail our geometry can hold, we increase the multires level by one. This way we get more geometry as we need it, and we can store sculpted detail on multiple levels.
The next iteration of this technology is called dynamic topology, at least in Blender. This feature dynamically subdivides the mesh into triangles as we sculpt, depending on the zoom level or a predefined absolute detail level. This way we just keep sculpting and the geometry adapts.
When we are done sculpting we need to make the mesh usable again. After a sculpting session the mesh is often in very bad condition in terms of performance and workability.
Sometimes we can get a better mesh automatically through different remesh algorithms that analyze the surface of the object and apply a new mesh on top of it. Many times though, we have to go through a process called retopology and manually recreate the mesh on top of the sculpted object.
Photogrammetry is yet another completely different way of generating 3D models. With this technique we use a camera and photograph an object multiple times from all angles in a lighting condition that is as even as possible. Then we feed these images into a program that interprets them and generates a 3D representation of the object.
There are obvious advantages and disadvantages here. We get real world data, meaning that whatever we create is bound to be close to realism. Many times we get textures and UV maps generated in the process, so we don’t have to spend as much time in these areas either.
However, much like with sculpting, the mesh needs to be reworked, either by remeshing or retopology. This means that we may need to recreate the UV map as well.
There will also be extensive cleanup work to do since the camera will catch not only the object in question but also the surroundings.
Another downside is that we need to have the object available to photograph it, and we need to put it on a surface, meaning that part of the object will be unreachable. For instance, a rock will have to lie down as we photograph it, and the underside is not accessible during a single photo session. This will result in holes in our mesh that we have to deal with in some way.
Photogrammetry is a relatively new invention that has gained lots of traction lately. We can not only photograph small objects; we can also use a drone to photograph a whole area and recreate larger structures.
This is good news for preserving old buildings or to study an area faster.
There are also scanners that can be used to scan an object or area, much like sonar works. The data can then be fed through software to recreate a 3D map.
There are many kinds of digital simulations. Here I will list a few.
Each of these has its own purpose, and most of them have multiple purposes, as you can probably imagine. When we simulate something we create a setup with different objects and parameters that interact with each other over time. The computer calculates how things will move and what will happen for each frame we run the simulation.
We can then use the result to create animation, but also to create a scene or objects based on simulation rather than raw manual input from other modeling techniques. Imagine you were to create a wave splashing on a rock. You may model or use photogrammetry to create the rock, but the wave is more difficult. You may be able to sculpt it, but it would be far more convenient to run a simulation and have the wave splash on the rock by itself, creating the shape based on parameters such as the angle at which the wave hits the rock, the size and velocity of the wave, and so on.
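The frame-by-frame idea can be shown with a toy simulation loop: the computer updates state from parameters rather than from hand-modeled positions. Here a ball is dropped onto the ground, with gravity and a bounce damping factor (all names and values are made up for illustration):

```python
def simulate(height, frames, fps=24, gravity=-9.81, damping=0.6):
    """Return the ball's height for each simulated frame."""
    dt = 1.0 / fps
    z, vz = height, 0.0
    trajectory = []
    for _ in range(frames):
        vz += gravity * dt      # gravity accelerates the ball downward
        z += vz * dt            # move according to current velocity
        if z < 0.0:             # hit the ground: bounce and lose energy
            z = 0.0
            vz = -vz * damping
        trajectory.append(z)
    return trajectory

path = simulate(height=2.0, frames=48)
```

Changing a single parameter, such as the damping, changes the whole motion, which is exactly why simulation work is mostly about tweaking parameters rather than posing geometry by hand.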
Similarly, we could use a physics simulation in combination with a soft body object to create a car crash, instead of having to model every frame by hand.
Another example would be a cloth simulation. You could sculpt the pillows for your next architectural visualization scene, or you can use a cloth simulation to create them with all the wrinkles included.
Simulations lean much more towards VFX than, for instance, NURBS, but we can still consider simulation a modeling technique since we create or deform objects with it.
Simulation is a much more technical type of 3D modeling, since we mostly tweak and fine-tune parameters rather than directly shaping the object.
Procedural modeling comes in many shapes and sizes. I will divide it into two different types. The first one is tool based. We, or someone else, create a tool that is designed to procedurally generate a bunch of similar objects. For instance, we could have a building generator. We could then input a bunch of parameters, like how many floors, how high the ceiling should be, and what kind of roof shape it should have. Then we run the program a number of times, and each time through, a new model that follows our criteria is spit out.
There are many such tools for specific types of models, and we can also create our own model generators and expose certain parameters that control the kind of model we want the tool to output.
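A toy version of such a generator might look like this. Everything here (function names, parameters, the output format) is illustrative, but it captures the pattern: expose a few parameters, add some randomness, and get a different model description out on every run:

```python
import random

def generate_building(floors, ceiling_height, roof_style,
                      width=8.0, depth=6.0):
    """Describe a building as stacked floor boxes topped with a roof."""
    building = []
    for i in range(floors):
        building.append({
            "type": "floor",
            "z": i * ceiling_height,              # bottom of this floor
            "size": (width, depth, ceiling_height),
        })
    building.append({"type": roof_style, "z": floors * ceiling_height})
    return building

def random_building(rng):
    """Run the generator with randomized parameters, as a tool might."""
    return generate_building(
        floors=rng.randint(2, 10),
        ceiling_height=rng.choice([2.6, 3.0, 3.4]),
        roof_style=rng.choice(["flat", "gabled", "hipped"]),
    )

rng = random.Random(42)       # seeded so results are repeatable
city_block = [random_building(rng) for _ in range(5)]
```

In a real pipeline each description would then be turned into actual geometry; the point is that the variation comes from parameters, not from modeling each building by hand.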
The next kind of procedural modeling is closely tied to shading. A shader can have a displacement output, and through this displacement we take a simple primitive such as a sphere or a plane and use mathematical formulas to deform the surface into a complex object or surface.
This is a trend that has grown as more and better tools have become available to displace geometry through shading. Both traditional displacement, which works along a single up-and-down axis, and vector displacement are available. Vector displacement can displace geometry in all directions, creating very advanced objects from simple geometry.
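Here is a minimal sketch of the traditional single-axis kind: start with a flat grid and push each point up or down along its normal (simply +Z here) using a formula. Real displacement runs in the shader on far denser geometry, and the sine-based "terrain" formula below is made up for illustration:

```python
from math import sin, cos

def displace(verts, amplitude=0.3, frequency=2.0):
    """Offset each vertex along +Z by a procedural height formula."""
    return [(x, y, z + amplitude * sin(frequency * x) * cos(frequency * y))
            for x, y, z in verts]

# A flat 5x5 grid in the XY plane.
n = 5
flat = [(x * 0.5, y * 0.5, 0.0) for x in range(n) for y in range(n)]
bumpy = displace(flat)
```

Vector displacement would replace the single Z offset with a full 3D offset per point, which is what allows simple geometry to grow into much more complex shapes.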
With boolean modeling we start with a model and cut away from it, or add other objects to it, to create a new shape. This is closely tied to box modeling, and we often use the two techniques together.
Normally we model basic shapes with box modeling and then combine different shapes with boolean operations. The operations we have to work with are:
The difference operator is the most common. This is the operator that cuts away the shape and volume of one object from another.
Union will merge two objects together and intersect will save only the geometry that two objects share.
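The three operators are easiest to see on a voxel grid, where they reduce to plain set operations. Real mesh booleans cut and rebuild polygons rather than working on voxels, but the operators behave exactly like this:

```python
def voxels_in_sphere(center, radius, size):
    """All integer grid cells inside a sphere, as a set of (x, y, z)."""
    cx, cy, cz = center
    return {(x, y, z)
            for x in range(size) for y in range(size) for z in range(size)
            if (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= radius ** 2}

a = voxels_in_sphere((4, 4, 4), 3, 9)   # one sphere
b = voxels_in_sphere((6, 4, 4), 3, 9)   # an overlapping sphere

union = a | b          # everything in either object
intersect = a & b      # only the shared volume
difference = a - b     # a with b's volume cut away
```

The difference result is what you would get cutting a round notch out of one sphere with the other, which is why the difference operator is the workhorse of boolean modeling.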
Booleans can help us create shapes that would otherwise be time-consuming to mimic with other modeling techniques. We can combine circular or bent shapes with square hard-surface shapes and cut away or add these together.
Kit bashing is another type of modeling where we start with a kit of objects that we combine into more detailed objects. Or we may use kit bashing to detail an object that was made with some other type of modeling.
Kit bashing is also very common when creating hard surface objects. It allows us to explore how different pieces could fit together without needing a complete picture of what the final piece will look like.
Kit bashing is excellent for detailing a scene. When using kit bashing, one should keep in mind the ratio of high frequency, middle frequency and low frequency detail. Well-composed shots usually have a good mix and arrangement of different distributions of detail.
This is true both for hard surface and organic modeling. For instance, a fictional robot may have more detail around what should be perceived as the head or focus point, while a forest may have different distributions of plants, trees and mushrooms depending on where each species would grow most effectively. Some are evenly spaced across the scene while others are clumped together or concentrated in a specific area.
This is not really a modeling technique, but a good practice. When creating 3D assets it is a good idea to keep modularity in mind. We may be creating a cityscape and need to model multiple buildings that look similar. In that case we should think about modularity so that we can reuse certain parts of one building in the next.
We can even go so far as to model different building sections that we can rearrange in different ways to create variation.
When deciding on what type of modeling to use, we need to think about what end result we are aiming at. But in most cases it is going to be a combination, especially if we are creating a scene. In those cases some objects may require one technique while other objects require others.
If you are a beginner artist, I would suggest starting with box modeling and polygon modeling, simply because they use the same tools and these techniques are the foundation of all modeling. But if you want to niche down into 3D printing, for example, NURBS modeling might be where you should start.
I hope you found this content useful. Please help me share this. It helps a lot.
Thanks for your time.