Pink textures in Blender and how to avoid them

Not too long ago, I was revisiting one of my older blend files. I browsed the archive, finally found it, opened it up and had a look. I quickly realized that something was wrong. The scene was a splash of pink shades, lighting up the room intensely. Any passersby outside my window would surely suspect a small-scale cannabis farm in there. If they didn't know about missing textures in Blender, that is…

A pink surface means that Blender could not find the texture file. In most cases, you can go to File -> External Data -> Find Missing Files and browse for the location to retrieve them. Blender will search the folder you select and its sub-folders to find the missing textures by file name.


The scene I was opening was from November 2016 and looked like this. It was a project I made for an old Blender Guru contest. It didn't do well in the contest, but it was one of my biggest 3D art accomplishments up to that point.

If your scene has one or two missing texture files, you can just browse for them. But if there are anywhere between a handful and a hundred scattered across your hard drive, finding them all by browsing could be hard work.

Instead of browsing through the interface to find the files that may be missing, go to File -> External Data and hit "Report Missing Files". Blender will search through the external files it is sourcing and list the missing ones in the info editor.

When you open the info editor, you will see something like this.

Keep in mind that these are the file locations where Blender thinks the files should be, but they are not there. Still, we can learn a few things from this report. We see the filename, and we may recognize parts of the file path. For instance, I have a folder called textures and one called resources on my network drive. With this info, I may be able to find the location of these files.

When you have figured out where the files are, go to File -> External Data -> Find Missing Files and browse to that location to retrieve them.
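Under the hood, this search is essentially a filename match over a directory tree. Here is a rough Python sketch of the idea (this is not Blender's actual code, and the folder and file names are made up):

```python
import os

def find_missing_files(missing_names, search_root):
    """Walk search_root and its sub-folders, matching files by name.
    Roughly what File -> External Data -> Find Missing Files does."""
    found = {}
    for dirpath, _dirnames, filenames in os.walk(search_root):
        for name in filenames:
            # First match by filename wins, just like a simple search
            if name in missing_names and name not in found:
                found[name] = os.path.join(dirpath, name)
    return found
```

You would feed it the filenames from the missing-files report and the top folder you suspect they live under; anything not found stays missing.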

But why did this happen in the first place?

Absolute vs. relative file paths

It has to do with file paths. It may sound technical, but bear with me.

There are two kinds of file paths: relative and absolute. An absolute path is probably the one you are familiar with. On Windows, an absolute file path may look something like this:

C:\textures\wood\planks.png

It includes the whole file path from the drive letter to the final file. A relative file path is given in relation to some other file or directory. In Blender, it may look something like this:

//textures\planks.png

The two leading slashes show that the path is relative to the folder containing the blend file. If the blend file sits in C:\projects\my_project, the full absolute path for this file would look something like this:

C:\projects\my_project\textures\planks.png
The blend file is configured to use either absolute or relative paths. By default, Blender uses relative file paths. You can change this by going to File -> External Data and clicking either "Make All Paths Relative" or "Make All Paths Absolute".
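To make the distinction concrete, here is a minimal Python sketch of how a Blender-style relative path resolves against the blend file's folder. It mirrors the idea behind Blender's `bpy.path.abspath`; the paths shown are just examples:

```python
import os

def resolve_blender_path(path, blend_dir):
    """A leading '//' marks a Blender relative path, resolved
    against the folder that contains the .blend file."""
    if path.startswith("//"):
        return os.path.normpath(os.path.join(blend_dir, path[2:]))
    return path  # already absolute, use as-is

# A relative texture path next to a blend file in /projects/scene:
resolve_blender_path("//textures/wood.png", "/projects/scene")
```

This also shows why moving the blend file breaks relative links: the result depends on where the blend file currently sits.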

There are advantages and disadvantages to both types of paths. We should use a relative file path when we have a project with all the files associated with it contained in one folder. One or more blend files and possibly sub-folders for our textures, hdri maps, and rendered images could act as a self-contained project.

However, if you are like me and have been using Blender for a while, you may have a library of assets you want to bring into your projects and source directly from the library. If that is the case, we should use absolute file paths.

I will now describe the setup I use to avoid pink textures while using a local library for speed, while still being able to export projects that are self-contained.

How to store external assets

I prefer a local asset library. I store it on a NAS device since I work with both my laptop and desktop and want to reach the library from both machines. I map the NAS folder where my library lives to the same network drive letter on both machines. This way I have the same absolute path on both. In my case, I use K:.

I then have Blender set up to use absolute paths by default. This way, I am free to move the blend files around without them losing contact with external assets. With relative file paths, I would lose the link between the assets and the blend file if I moved either one. I use a folder structure like this to keep organized.

  • Textures
    • Texturehaven
      • Wood
      • Concrete
      • Fabric
      • etc…
    • Artisticrender
      • Bricks
      • Wood
      • etc…
    • etc…
  • HDRI
    • Hdrihaven
      • Interior
      • Exterior
      • etc…
    • Artisticrender
    • etc…
  • Models
    • Artisticrender
    • Chocofur
    • etc…

First, I have the type of asset, then the name of the service or provider, either myself or some library on the Internet. The reason for this is that each provider may have different licensing, and for some projects I may not be able to use a certain license. Within each provider, I try to stick to their own naming scheme: if a texture is categorized as wood in the original library, I categorize it as wood within that provider folder.

Now, I don't download full libraries of textures. When I need a texture, I browse the websites I know have good textures and download only the ones I need. Before downloading, I search my local library for the asset, and if it is there, I use the local copy. If not, I download it.

This is the basis of the library structure. The next problem is whenever you want to share a self-contained project.

Sharing a blend file with external data

There are two ways.

The first is to pack the external files into the blend file. The blend file will then act as a container, storing all external files within itself. The blend file can become huge, but the benefit is that it does not depend on any other external files. To do this, go to File -> External Data and select "Pack All Into .blend".

You can also check "Automatically pack into .blend", but leaving that on permanently would defeat the purpose of avoiding duplicate files when we don't need them. Now we can share the blend file. I use this for quick shares when I just want to get the file sent.

For more professional projects and long-term sharing, the process goes like this:

  • Copy the file
  • Open the copy
  • Pack into blend
  • Unpack with “use files in current directory” setting.
    • This creates a subfolder called “textures” and sources textures from there.
  • Make all paths relative
  • Zip the .blend and texture subfolder
  • Share the zipfile

This way I get duplicate data. But it is only temporary. When I archive this project, I can delete any shared versions and keep my original files sourcing from the library. I also have the possibility to add notes and change project details before I share it.
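The zipping step at the end can even be scripted. Here is a small Python sketch, assuming the unpacked textures live in a textures sub-folder next to the blend file (the file names are hypothetical):

```python
import os
import zipfile

def zip_project(blend_file, textures_dir, out_zip):
    """Bundle the .blend and its textures sub-folder into one zip,
    preserving the relative layout so the '//' paths keep working."""
    root = os.path.dirname(blend_file)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        # Store the blend file at the top of the archive
        zf.write(blend_file, os.path.relpath(blend_file, root))
        # Then every texture, keeping the sub-folder structure
        for dirpath, _dirs, files in os.walk(textures_dir):
            for name in files:
                full = os.path.join(dirpath, name)
                zf.write(full, os.path.relpath(full, root))
```

Whoever unzips the archive gets the blend file and a textures folder side by side, which is exactly what the relative paths expect.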

Final thoughts

We started with Blender's pink textures and ended up with a complete system. Having a good structure for your assets is just good practice, and it helps to avoid more problems than textures not sourcing correctly. Just being aware of relative and absolute file paths can get you a long way toward understanding why life sometimes is a tangled mess of broken data paths.

I hope you enjoyed reading, and as always, please share this so that others may benefit. It also helps me to grow the website and provide more and better content. If you want more, you can also consider subscribing to the newsletter and for feedback or questions, comment below.

Enjoy your day!

Get our free workflow cheatsheet!

Join our newsletter and get updates of our news and content as well as our 3D modelling workflow cheat sheet.

5 cool camera tricks in Blender 2.80

The camera object has some awesome settings to get just the right angle, focal length and depth of field, adding that nice blur and bokeh that everybody loves in photos. Today I wanted to focus on some ideas around the camera.

In an earlier article, we tackled the movement of the camera and viewport navigation. In this article, we look at how the camera settings correspond to a real world camera, and then continue with some special cases.

Here is the table of contents, if you will:

  • Camera settings 
  • Viewport display helpers 
  • Isometric camera setup 
  • Turntable camera 
  • 360 degree camera 
  • Fisheye Lens 

Blender camera settings 

Blender's camera object is not that different from a regular camera. Since our built-in render engines, Eevee and Cycles, aim at physical correctness, it is not very shocking that the camera should mimic the real world.

While the more camera-specific settings live within the camera object, settings related to output live within the render settings.

Here we can set the resolution and the aspect ratio. Leave the aspect ratio set to one and adjust the resolution for your needs. We can change the percentage value to render a smaller or larger version of the same view. This is useful primarily for creating preview renders more quickly.

With resolution and aspect ratio out of the way, select the camera and move over to the object data tab for a quick rundown. We will look at the most useful settings and then dive into some special cases further down. 

We will start with the first section, called lens. Here we can set the type of camera we want. Perspective is your typical camera, the one you have everywhere: from your webcam, to your mobile phone, to the most high-end film camera.

The next type is orthographic. This type eliminates all perspective and lens distortion. We can use this head-on view for more technical applications. For instance, rendering a blueprint, or an isometric view that is common when making certain kinds of low poly art. 

The third option is for extreme lens distortion instead. It’s called panoramic. This is for emulating a fisheye lens or a 360 degree camera. A common use case is VR. 

We use the perspective type in most cases, so let’s stick with that for now. 

Next is focal length. This is your typical camera zoom. For a DSLR camera, your typical kit lens would be something like an 18-55mm zoom. If we know little about real world cameras, this may not make much sense to us. How much zoom this really is depends on the size of the camera sensor. We can set the sensor size under the camera section. Sensor fit should be horizontal if you are trying to match a real world camera. Height will gray out, and width will be the value to change to match a real world camera.

Two common terms in the DSLR world are full frame sensor and crop sensor. A crop sensor is somewhere around 23mm wide, and a full frame is around 36mm (the format is named after 35mm film). For your smartphone, though, the sensor could be somewhere around 4.5mm. The focal length will have to vary between these sensors for a similar result.

So why do we care? There is a relationship between the sensor size and the focal length: the same focal length gives a wider view on a larger sensor. Keep this in mind if you expect a certain look and can't get it with the focal length you are trying. With a real camera, the sensor size cannot change; all we have to work with is the focal length. But in Blender, we have all the flexibility in the world, carving away or gluing on to our sensor with the help of a slider. That is the power of the virtual.
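That relationship is plain trigonometry: the horizontal field of view follows from the sensor width and the focal length. A quick Python illustration (standard thin-lens math; the numbers are just examples):

```python
import math

def horizontal_fov(sensor_width_mm, focal_length_mm):
    """Horizontal field of view in degrees for a given sensor
    width and focal length (thin-lens approximation)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 50mm lens gives a much wider view on a full frame sensor (36mm)
# than on a small smartphone sensor (about 4.5mm):
full_frame = horizontal_fov(36, 50)   # ~39.6 degrees
phone      = horizontal_fov(4.5, 50)  # ~5.2 degrees
```

This is why the same focal length value can look "zoomed in" or "wide" depending on the sensor width you set in Blender.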

In the camera section, we also have a whole range of presets. Click on the icon with the lines to the right of where it says camera. There are some common DSLR sizes, some phones and other devices. Keep in mind that presets with a fixed lens will adjust not only your sensor size but also the focal length value.

Let's leave focal length and sensors for a while and move on to depth of field (DOF). In Blender, depth of field covers both the aperture of a real world camera and the focus point.

In this section, we can set an object we will focus on. This could be an empty we can move around. The second option is to set a distance from the camera. 

Next, the aperture subsection. F-stop is the standard measure used by photographers when talking about aperture. Aperture refers to how wide the hole in the lens is that lets in light. The wider the hole, the smaller the f-stop number and the shallower the area in focus.

In the real world, an aperture of 1.2 would be a wide open gap. But depending on the scale of your scene, you may have to go to the extreme sometimes to get the DOF you want. Don’t be afraid to try a number like 0.1 just to see how it looks. 
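For reference, the f-number is defined as focal length divided by aperture diameter, which is why a smaller number means a wider opening. A tiny Python sketch of the relationship (example values only):

```python
def aperture_diameter_mm(focal_length_mm, f_stop):
    """The f-number is focal length / aperture diameter,
    so the diameter is focal length / f-stop."""
    return focal_length_mm / f_stop

# Lower f-stop -> wider opening -> shallower depth of field:
wide_open  = aperture_diameter_mm(50, 1.2)   # ~41.7 mm
stopped_in = aperture_diameter_mm(50, 16.0)  # ~3.1 mm
```

In Blender, only the depth of field reacts to this number, as the next paragraph explains.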

One key difference between a real world camera and Blender's camera is that the aperture only controls the DOF, not how much light comes in through the lens. We have no shutter speed, shutter speed being the time the hole is open for our shot. Instead, we have light and exposure values to play with. We also don't have an ISO value on the Blender camera.

The last setting we will touch briefly on before moving on is clipping. We find these settings in the lens section. They dictate where the camera's view starts and ends; the camera will see nothing outside this range. Just make sure that your scene fits within the clipping start and end values so that nothing gets lost.

Viewport display helpers 

We will now jump over to the viewport display section. This section will not give us any changes to the rendered result. Instead, it contains different guides to help us with composition and setting other values. 

The size value dictates how large the camera object appears in the viewport. The limits and mist checkboxes are more useful. Let's start with limits. When depth of field is set to a distance value, limits will show a cross at that distance, giving us an exact focus location. It will also show the start and end of clipping.

The mist checkbox is a guide to help us tweak the mist pass for the compositor. We can use this to create depth maps or add compositing effects. To enable it, go to the view layer tab. In the passes section, check the mist checkbox. To adjust where the mist starts and ends, go to the world tab and find the mist section. The start and depth values dictate where the mist begins and how far it progresses. This is where the mist checkbox comes in handy; we can see the beginning and end of the mist easily.

From this point, it is a matter of post processing to use the mist pass. Let’s move on to the composition guides subsection. When viewing through the camera, these check boxes will draw different lines across the camera view. This can help to frame our shot according to common composition rules, like the rule of thirds or the golden ratio. It can also help us find the middle of our shot easily. 

Next we have the passepartout subsection. This controls the opacity outside the bounds of the camera while viewing through it. 

Now we have some understanding of what settings we can tweak and control through the camera object itself. Let’s continue to some different camera setups useful for different projects. 

Isometric camera setup 

An isometric camera is orthographic and views down on the subject at a specific angle, from the top right in most cases, giving an overview shot, usually of a small low poly world.

In the latest release of Blender 2.80, the isocam add-on is included by default. We will use it to quickly set up the camera. 

Enable the add-on from the preferences, found under the edit menu. Find the add-ons section and type "isocam"; the list filters as you type. Check the box next to it. Then go to the 3D viewport and hit shift+a to open the add menu. At the bottom, you will find "create isocam". The TrueIsocam option should do the job for you. Position it in your scene; all parameters should be set up correctly.

Turntable camera 

As with the isometric camera setup, there is an add-on for creating turntable cameras in Blender. The name of the add-on is "Turnaround Camera", and it is bundled with Blender. Enable the add-on in your preferences.

We find the settings for the turnaround camera in the n-panel, under the view tab. Here you will find shortcuts to the start and end frame values and the scene's camera object. You can set the rotation axis and change some other options.

To use the add-on, set your camera to a starting position, select the object or group of objects to rotate around and hit the turnaround button at the top of the settings. 

We are halfway there. Play the animation with space or shift+space, depending on your settings. You will notice that the camera speeds up and slows down as it circles around. This is because, by default, Blender interpolates animation with an ease in/ease out bezier curve. We want a constant rotation, so we need to set the keyframe interpolation to linear.

To do this, select the empty object that the add-on created, called "MCH_Rotation_target". This is the animated object. Open the graph editor and hit "A" to select all keyframes. Press "T" and choose linear. All keyframes now have linear interpolation, and our camera will rotate at a constant speed.

360 degree camera 

Let's see how we can create 360 degree images or videos using the camera object. This is probably the easiest special case camera to set up. All you have to do is select your camera, go to the camera object data settings and change the camera type to panoramic; by default, the panorama type should be equirectangular.

With the default settings, you have a 360 degree camera. With this camera you could render video for VR, or possibly your own hdri maps for lighting. 

Note that at the time of this writing, the panoramic camera type does not work in Eevee. 

Fisheye lens & mimicking a real world camera 

If you know a thing or two about photography, you may think a fisheye lens is configured using the normal perspective camera with a low focal length to distort the image. However, if you try this, you will realize that no distortion happens. Instead, the fisheye lens lives in the panoramic category and is therefore also restricted to Cycles for now. Change the type to panoramic and then change the panorama type to "Fisheye Equisolid". Now "Lens" acts as the focal length value, and together with "Field of view" these two settings create a fisheye lens.

If you tested this out, you probably realized that the perspective camera type has no distortion. So, if you want to make the Blender camera look more like a real camera, you can use the fisheye equisolid type. This ensures that there is some distortion, just like a real camera. The lens value can only be set between 0.01 and 15.0 with the slider, but you can type in numbers up to 100.

Focal lengths of up to 100 are probably enough, since the greater the focal length, the smaller the distortion of the image. At higher focal lengths, you probably won't notice the distortion anyway and are better off with a perspective camera.
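For the curious, the standard equisolid projection maps an angle theta from the lens axis to an image radius r = 2 * f * sin(theta / 2). Inverting it shows why a long focal length barely distorts: the angle the sensor covers shrinks quickly. A Python sketch (textbook fisheye math; the sensor width is just an example):

```python
import math

def equisolid_half_fov(focal_length_mm, sensor_width_mm):
    """An equisolid fisheye maps the off-axis angle theta to an image
    radius r = 2 * f * sin(theta / 2). Solving for the angle at the
    sensor edge (r = sensor width / 2) gives the half field of view
    in degrees."""
    r = sensor_width_mm / 2
    x = min(1.0, r / (2 * focal_length_mm))  # clamp: lens sees everything
    return math.degrees(2 * math.asin(x))

# Short focal lengths cover a huge angle (strong fisheye distortion),
# long ones approach a normal perspective view:
wide   = equisolid_half_fov(15, 36)   # ~73.7 degrees off-axis
narrow = equisolid_half_fov(100, 36)  # ~10.3 degrees off-axis
```

At a narrow angle like that, the fisheye curve and a straight perspective projection are nearly indistinguishable.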

Final thoughts 

If you have read all this way, you now have a good understanding of what the camera object in Blender is capable of. We looked at the isometric setup, turntable animation, the 360 degree camera, and more. You have also learned some differences from a real world camera, for instance, the lack of distortion in a perspective type camera and that the aperture of Blender's camera does not influence light.

If you liked this article, please share it. You can also consider joining the newsletter and comment or leave feedback on the article.

Thanks for your time. 


How to move the camera in Blender

It hit me that, to get started with Blender, how to move the camera is the most essential thing you will have to learn. In Blender, there are two different entities we could refer to when talking about the camera: the viewport navigation and the camera objects within our scene. Both are important, but viewport navigation is the first of the two hurdles.

Navigate the viewport by holding the middle mouse button (scroll wheel) and moving the mouse. Holding ctrl will zoom and shift will pan. For camera object movement, the easiest way is to press "N", find the view section and check "lock camera to view". Then press "number pad 0" to go into camera view and move as with viewport navigation.

The viewport is our window into the 3D world. A camera object determines how and what we render in our final image. The camera is a much more advanced concept compared to viewport navigation.

If you are looking for something more advanced, check out:

5 cool camera tricks in Blender 2.80

How do we control viewport navigation?

To move around in the 3D world, we use rotate, pan, and zoom. Hold the middle mouse button while the mouse is in the 3D viewport and move the mouse to rotate around. Hold ctrl for zooming and shift for panning.

When zooming and rotating, we do so in relation to an invisible point in space. We zoom in and out in relation to a point and rotate around that same point.

Panning allows us to move the position we are viewing from, but it also moves the invisible point with the camera as we pan.

There are other ways to move the invisible point. For instance, we can hold both shift+ctrl together with the middle mouse button pressed and move the mouse. This will move us through the viewport like zooming. But in this way the distance between the view and the invisible point will stay constant. We are not zooming towards a point, instead, pushing the point forward as we move the view back and forth.

Sometimes we want to move the invisible point to a specific location where we want to work. This is easiest done by selecting an object and pressing period on the number pad. This will zoom the view to the selection and move the point we zoom and rotate in relation to.

A similar operation is shift+c. This shortcut will zoom the view so that the entire scene is in view, and the median point of the scene becomes the invisible point. This is very helpful when your scene is concentrated in one location and not scattered around. If it is scattered, or you have stray objects far away in your scene, you will end up much further zoomed out.

Sometimes we want a perfect view from a given angle: a front view, top view, or side view, for instance. You can get these with the number pad. The numbers 1, 3 and 7 will move us to these specific angles. Using these hotkeys, Blender will also put us in an orthographic view. Orthographic is a perfect head-on view without perspective, similar to how you would view a blueprint.

As we rotate our view out of one of these views, Blender will turn perspective back on automatically. We can toggle orthographic and perspective view with number pad 5 as well.

With Blender 2.80, we can also hold ALT and drag our mouse up, down, left, or right. This will bring us into the closest perfect viewing angle and switch to an orthographic view. Continue to hold ALT and drag the mouse to switch between views with 90-degree increments.

In the top right corner of the 3D viewport, there is also a gizmo and a handful of icons. We can use these for navigation. If you ever forget a shortcut key, or perhaps you are using a touchscreen, click or click and drag to use the widgets.

The last two options for viewport navigation are fly and walk mode. These are not usually useful for your general moving around action, but they can come in handy when positioning a camera object later.

The walk and fly navigation modes have no shortcuts pre-assigned. Instead, we have to use the search menu. You may have it mapped to your spacebar, but if you don't, you can reach it with "F3". Type in "walk" or "fly" for the corresponding navigation type. I will not cover fly navigation since the only benefit I can see is that you could animate some camera motion with it. If you disagree or know some secret I don't, please let me know in the comments below!

Ok, over to walk navigation. When selecting walk navigation from the search bar, you will enter a kind of walk mode. You use the classic FPS game style WASD navigation to move around and change direction by moving the mouse. We also use Q and E to move straight up and down, and we can hold shift to increase the speed.

Next up, a handful of settings for movement.

Settings related to navigation

We can find the settings in the edit menu. Go to user preferences and find the navigation section.

There are quite a few settings related to navigation. However, there are only a couple I think are useful. "Orbit around selection" is the first one, and the name speaks for itself: instead of using an invisible point, our selection will be that point.
To change this behaviour, tick the "Orbit around selection" checkbox. Blender saves preferences automatically. Now try to select something in the scene, and you should be orbiting around it when you rotate the view.

The second setting is in the same place. Just a little lower in the zoom subsection, you will find “zoom to mouse position”. This will zoom to wherever your mouse is instead of just towards the middle.

I normally don't use either of these settings, but I know that many people find them useful. Try them out and make your own decision.

How do we solve navigation issues?

We may be working on our project when suddenly, out of nowhere, we cannot move the camera as we expect, or we just can't see what we are doing. There are some common scenarios you might stumble upon in the beginning. We will pick out a few and come up with solutions.

The first one is clipping. The viewport camera will display nothing too close to or too far away from the camera, but we can change these distances. If you come across this problem, hit "N" to bring up the n-panel. You may have multiple tabs just to the right of the menu if you have some add-ons enabled. In that case, go to the "view" tab. It should be the default one, but you never know. Go to the "view" section and find the clipping settings.

There are two values: clipping start and end. Adjust these until you see as much as you need to see. For me, the start value is too high by default, so I normally change it to 1 centimeter or even 1 millimeter.

An even more common problem for beginners is that you navigate away from your scene and literally get lost in space. When this happens, use shift+c as we talked about earlier. It will zoom you back to where your objects are.

The third and last problem we will cover is that you may come to a point where you can no longer zoom. This is because you are too close to the invisible point you zoom towards. Here, shift+c will save you again. You can also select an object and hit number pad period to zoom in on that specific object. This also works in edit mode if you want to zoom in on a specific edge, face or vertex, or collection of elements.

How do we control the camera object?

If viewport navigation comes at the beginning of the learning process, controlling a camera object comes at the end, at least in a workflow sense. It is not until we hit render and have Blender calculate our final image that we use the camera object.
That is a shallow viewpoint, though. Don't forget that we need to check and double check what is inside our frame throughout the project to get an idea of what the result will look like. Remember, only the things we can see matter. The camera is very important.

Let’s get started. To align the viewport with the camera, hit number pad 0. Now we can go back into the n-panel and find the view section again. This time, look for the subsection “view lock” and find a checkbox called “lock camera to view”. This will allow us to navigate the viewport like we did before, and the camera will follow. This is probably the easiest way to align and adjust the camera for a still image.

While the camera is locked to the view, this could also be the time for walk or fly navigation if we want to control the camera that way instead. When the camera is in position, don't forget to uncheck "lock camera to view" before you continue to navigate, or you will mess up the camera position you just made. These camera movements live outside the undo history, meaning we can't use ctrl+z to undo them. Keep this in mind so you don't accidentally ruin the perfect camera position.

We can also do the opposite: pick a view and move the camera to it. When we have positioned our viewport, we can hit ctrl+alt+numpad 0 to move the camera to the current view, and then just fine-tune the position.
Another possibility is to first select the camera, then press number pad 0 to go into the camera view. Since we now have the camera selected, we can use the same transformations used for moving and rotating any other object. For obvious reasons, cameras can't scale. Tap "G" to move the camera and "R" to rotate it. To constrain the movement to a single axis, follow up with X, Y or Z; press the axis key twice to use the camera's local axis instead of the global one. To move the camera forward and backwards, for instance, press "G" and then "Z" twice to slide along the camera's local Z axis.

Another neat little trick, also available for any object, is to press a transformation shortcut (G, R or S) and then press and hold the middle mouse button. Depending on how you move the mouse before you release, the transformation will be constrained to the axis your mouse movement is closest to.

Wrap Up and where to next

This was the basics of how to move the camera in Blender, plus some extra tips for those who like to get ahead. If you want to know more about Blender's camera object and what it can do, you can read this article. It describes more settings related to real world cameras and how to set up some different scenarios like turntable animation, isometric cameras and a 360 degree camera for VR.

If you enjoyed this article, please consider sharing it with someone you think would benefit. It means the world. Also, you can sign up to the newsletter to get future updates and some extra perks and offers from time to time. It is now also possible to leave a comment on our articles. Scroll down if you have feedback to give, any question or just want to say hi.

Thanks for your time, and I hope you learned something.


Boolean modifier problems and how to solve them


Booleans have really risen in popularity since new hard surface add-ons popped up left and right. Though every time I use a boolean in Blender, I have this feeling of incoming crashes and meshes that just won't do what I want them to. So I thought an article outlining some common solutions to boolean modifier problems might be of some help. I took my experience, did a little research and ended up with this post.

To solve most boolean problems, we can just move the object a little in any direction. Sometimes you may have multiple meshes in the same object cutting into a base mesh. Separating these into their own objects, applying scale, and making sure normals face outwards is a good troubleshooting start.

Note: I will call the object with the modifier on it the host or base object. I will call the object targeted by the boolean modifier the target or boolean object.


What do the settings on the boolean modifier do?

The boolean modifier takes another object as input and does some operation on the volume shared between the meshes. The following are the possible operations. 

  • Difference
    • Difference is the most commonly used operation. It takes the target object and subtracts whatever volume it holds from the base object, cutting into it. 
  • Union
    • Union will join the target object to the base mesh and merge the geometry of the two objects. Any faces that would have remained inside the newly joined object are deleted, leaving a manifold (watertight) mesh. 
  • Intersect
    • Intersect removes all the volume that is not shared between the two objects. 
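The three operations behave like set operations on the two volumes. Blender of course works on mesh geometry rather than voxels, but a toy Python sketch using sets of voxel coordinates shows the logic:

```python
# Toy illustration: treat each object's volume as a set of voxel
# coordinates. The three boolean modes are then plain set operations.
cube_a = {(x, y) for x in range(4) for y in range(4)}        # base object
cube_b = {(x, y) for x in range(2, 6) for y in range(2, 6)}  # overlapping target

difference = cube_a - cube_b  # cut B's volume out of A
union      = cube_a | cube_b  # merge A and B into one volume
intersect  = cube_a & cube_b  # keep only the shared volume
```

Difference removes the overlap from the base, union keeps everything once, and intersect keeps only the overlap, exactly as the modifier does with mesh volumes.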

We also have the overlap threshold value slider. This is one of our defences against a misbehaving boolean. Keep this value as low as possible. The default value is 0.000001 meters, and that is a good starting point. However, if you have faces that are just barely overlapping, you can use this value either to ignore those barely overlapping parts or to tweak it so they land just precisely within range to be included.

In my experience, having too high a value can sometimes cause loose edges to be generated that shoot off in different directions. If you need to apply a boolean modifier, check the wireframe viewport mode first and see if you can spot any loose edges or other details that float off.

boolean modifier settings displayed

Boolean limitations

According to the Blender manual, there are some limitations to the boolean modifier. Booleans work best on manifold, or watertight, meshes. The manual doesn't say that booleans will always fail on meshes that aren't watertight, but if you don't make sure yours are, you can't count on a trouble-free operation.

The manual continues to list four key troublesome scenarios. 

  • Overlapping volumes
  • Overlapping geometry
  • Self-intersections
  • Zero-area faces

How do we troubleshoot boolean modifier problems?

There are a handful of tricks we can try if we run into an uncooperative boolean. I usually have a few steps that I go through whenever this happens. The first steps make sure we have a watertight mesh. 

Make sure you have applied the scale; do this with ctrl+a, selecting scale from the menu. This just tells Blender that the scale of the object as it is right now is the new starting point.

I would then usually try to remove doubles. In 2.80 and later, this operation lives in the merge menu: hit alt+m and select “by distance”, and any two vertices on top of or close to each other will merge. This closes any potential gaps in the mesh and takes care of zero-area faces.
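The idea behind merge by distance can be sketched in a few lines of Python. This is only an illustration of the principle, not Blender's implementation; the greedy pairwise search here would be far too slow for real meshes, where a spatial lookup structure is used instead:

```python
import math

def merge_by_distance(verts, threshold=1e-4):
    """Greedy stand-in for Blender's "Merge by Distance" (the old Remove
    Doubles): every vertex closer than `threshold` to an already kept
    vertex is merged into it."""
    kept = []
    for v in verts:
        for k in kept:
            if math.dist(v, k) <= threshold:
                break          # v is a "double" of k, drop it
        else:
            kept.append(v)     # no nearby vertex found, keep v
    return kept

# Two of these three vertices sit within the default 0.0001 threshold.
verts = [(0, 0, 0), (0, 0, 0.00005), (1, 0, 0)]
merged = merge_by_distance(verts)
print(len(verts) - len(merged), "vertices removed")  # mirrors Blender's info message
```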

Blender will tell you how many vertices were removed, and if it seems like an unreasonable amount, undo the operation or lower the threshold in the operator settings in the bottom left corner. In fact, I would suggest that if any of these steps does something unpredictable, try to find out why.

Next, we want to make sure we have consistent normals. We do this by going into edit mode, selecting everything and hitting shift+n. Most of the time, Blender will do a good job recalculating the normals so they all face outwards.

If you still suspect normals not being consistent, you can turn on the normal direction view for faces. In the overlay menu in edit mode, find the normals section and click the face icon. Then increase the size until you see lines drawn from the faces. This will help show the direction of each face normal. Make sure they are consistently outwards facing. 
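To get an intuition for what “outwards facing” means, here is a rough Python sketch that tests a face normal against the mesh centroid. This centroid test is only an approximation that holds for roughly convex shapes; Blender's actual recalculation is more robust:

```python
# Minimal vector helpers (a real script would use mathutils.Vector in Blender).
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def points_outward(tri, centroid):
    """True if the triangle's normal (from its winding order) faces away
    from the mesh centroid. An approximation for convex-ish meshes only."""
    normal = cross(sub(tri[1], tri[0]), sub(tri[2], tri[0]))
    face_center = tuple(sum(c) / 3 for c in zip(*tri))
    return dot(normal, sub(face_center, centroid)) > 0

# Top face of a unit cube centred at (0.5, 0.5, 0.5), wound counter-clockwise.
centroid = (0.5, 0.5, 0.5)
tri_up = ((0, 0, 1), (1, 0, 1), (1, 1, 1))       # normal (0, 0, 1): outwards
tri_flipped = ((0, 0, 1), (1, 1, 1), (1, 0, 1))  # reversed winding: inwards
print(points_outward(tri_up, centroid), points_outward(tri_flipped, centroid))  # → True False
```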

Go through these three steps for both the base object and the target. When done, your objects qualify for a boolean operation. 

When both objects qualify and if we still have problems, there are still steps left. 

We can move the object just slightly so that the objects intersect in a cleaner way. Avoid very slight intersections. Also, avoid edges moving alongside each other just a short distance apart. 

Another thing to keep in mind: booleans work best when the meshes don’t differ too much in density. If one object has millions of polygons and the other just a handful, Blender has to calculate the transition between these two extremes, and this can cause trouble.

I sometimes see multiple meshes in the same target object. In those cases, separate the target object into multiple objects and use a single boolean modifier for each of them on the base mesh. You can separate a piece by going into edit mode and selecting one element (face, vertex or edge) in the piece you want to boolean. Hit CTRL+L to select linked elements. Then hit “P” and separate by selection.

Add a new boolean modifier on the base mesh with this separated piece as the target. 

We can now check for loose geometry. You may have stray edges, faces or vertices that are just hanging in the air causing problems. In these cases, select the target object, go into edit mode, select one element and use CTRL+L to select linked. Then use CTRL+I to invert the selection. If you have any stray geometry floating around, it is now selected. Delete anything that does not belong in your object.
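The select linked plus invert trick is essentially a connected-component search over the mesh's edges. A small Python sketch of the idea (the vertex indices and edges are a made-up example):

```python
from collections import defaultdict, deque

def connected_component(start, edges):
    """Flood-fill over edges, like "Select Linked" starting from one vertex."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        for n in adj[v] - seen:
            seen.add(n)
            queue.append(n)
    return seen

# Two separate "islands": vertices 0-2 form the object, 3-4 are stray geometry.
edges = [(0, 1), (1, 2), (3, 4)]
linked = connected_component(0, edges)
stray = {0, 1, 2, 3, 4} - linked   # the "invert selection" step
print(sorted(stray))               # → [3, 4]
```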

If all else fails, apply the boolean modifier and see what the result looks like. When we do, it may become obvious to us why the modifier is not working properly. We may also find out that some manual cleanup is way faster than trying to solve the actual problem. In those cases, do the cleanup and save yourself some hassle. 

Are there any add-ons that could help us?

Some workflows are centered on boolean operations, and adding all those booleans manually would be a nightmare. Luckily, there are some add-ons available to help speed up a boolean-based workflow. However, if you are just making the occasional boolean, you probably won’t save that much time.

For the occasional boolean operation, there is the built-in add-on called Booltools. Enable it in the user preferences. It adds shortcut keys for quick boolean operations.

The most versatile shortcut this add-on has is ctrl+shift+b. This brings up a menu with all its options. The operations listed under auto boolean will make the operation on the selected object with the active object acting as the base mesh. It will also apply the modifier. The operations listed as brushes will not apply the modifier. 

Booltools also has a slice option. This will use the target object as a knife and slice where the faces intersect leaving a small gap in the base mesh. 

There are many add-ons that deal with boolean workflows these days. The most well known is probably Boxcutter, a paid add-on. With it, you just draw on the screen where you want the cut to happen.

Final thoughts

Booleans are fun tools to work with, and adding some extra speed with Booltools is also welcome.

While there seems to be an endless array of troubleshooting steps for boolean modifier problems, they are quick to go through once you know what they are. We can also use a handful of these steps to solve a whole range of other issues. For instance, an object with a scale that has not been applied will give weird results when you use certain tools on it.

I hope you learned something. If you did, or know someone else it may be useful for, please consider sharing this with others. It means the world to me. Also, if you want an occasional update on what is happening on the site, just subscribe to the newsletter and I will let you know.

Thanks for your time.


Blender cloth simulation 2.80 making a thick blanket

Blender cloth simulation feature image

Making cloth in Blender using the cloth simulator is easy when working with flat objects. Making flags wave in the air and thin blankets fall on to couches is quick. But once you need thickness, everything suddenly becomes much harder.

In this article, I want to go over a way to simulate the wavy feel of a thicker blanket while preserving its shape. We will combine multiple tools in Blender to make the cloth. Below, I summarize the process.

  • Create the geometry using modifiers and simple modelling
  • Use force objects and cloth simulation to create the wavy surface
  • Use a soft body simulation on a lattice, then deform the mesh according to the lattice.
  • Create the material.

Let’s dive in!


Create the base mesh

We will start by going into edit mode with the default cube selected, add a loop cut (ctrl+r) in the middle and scale (s) it out. Use the bevel tool (ctrl+b) to expand (scroll wheel) the loop cut into 3 cuts.

Add four more cuts along the X and Y axes to make even geometry. If you intend to create a shape other than a simple square, just keep in mind to give the geometry some density and even quads for Blender's cloth simulator to work with.

base mesh creation, viewing geometry

Array and clean up

Add two array modifiers to the cube. Increase the count to 5 on each of them; then, on one of the arrays, change the relative offset factor to 0 on X and 1 on Y. Combined, the two arrays should now form a grid of our object.

On each of the array modifiers, turn on the “merge” checkbox and increase the distance value until the bevelled vertices merge for each of the squares, in both the X and Y directions.

Apply the modifiers.

Our geometry has internal faces we need to remove. To do this, go into edit mode (tab) and deselect everything (alt+a). Use Blender's search function (F3 or space bar) and search for “select non manifold”. As you type, the list will filter.

While in edge select mode, use the “select non manifold” function. Then increase the selection (ctrl+numpad plus) and decrease it again (ctrl+numpad minus). Now remove (x) the selected faces. The mesh is now manifold and ready for simulation.
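Under the hood, a non-manifold edge is simply an edge that is not shared by exactly two faces: border edges, edges of internal faces shared by three or more faces, and so on. A small Python sketch of that test (the face data is a made-up example, not Blender's API):

```python
from collections import Counter

def non_manifold_edges(faces):
    """An edge is manifold when exactly two faces share it; anything else
    (border edges, edges shared by 3+ faces) is roughly what
    "select non manifold" finds."""
    counts = Counter()
    for face in faces:
        for i in range(len(face)):
            # Sort so (1, 2) and (2, 1) count as the same edge.
            edge = tuple(sorted((face[i], face[(i + 1) % len(face)])))
            counts[edge] += 1
    return [e for e, n in counts.items() if n != 2]

# Two triangles sharing the edge (1, 2): that edge is manifold,
# the four outer border edges are not.
faces = [(0, 1, 2), (1, 3, 2)]
bad = non_manifold_edges(faces)
print(sorted(bad))  # → [(0, 1), (0, 2), (1, 3), (2, 3)]
```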

If you want, UV unwrap the mesh at this point. Personally, I didn’t for most of my tests and instead used box projection. Tab out to object mode. The mesh is now ready for simulation.

make manifold

Blender cloth simulation setup

Blender has a collection of forces that can manipulate a cloth simulation. We will use these to create our wavy look while still maintaining a flat blanket. We will adjust the overall shape with a soft body simulation later.

Let’s continue by adding forces and setting up Blender's cloth simulation.

In object mode, add (shift+a) a turbulence force. Increase its strength to about 500. Duplicate (shift+d) the force and place them at opposite corners of the cloth we will simulate.

Change the strength of one force to about 300. The strength may need some adjustment to get the desired result.

Select the cloth mesh and add a cloth simulation to it. Set the cloth simulation to the cotton preset in the cloth header. Then go to the collision section and enable self-collision. Also, go to the field weights section and change gravity to 0 so that our cloth does not fall to the ground when we start the simulation.

Play the animation (shift+space) and you will see the cloth deform quite quickly.

The way it deforms, however, is on a scale that is too large. This is because the turbulence forces help change the mesh according to an invisible texture that has a certain size to it. The easiest way to control this is to scale up the whole scene significantly.

Select everything (a) in your scene and scale it up somewhere between 5 and 10 times. You may have to scale the scene multiple times and test the simulation in between before you get the scale of the cloth deformation you want. The simulation should only need to run for a second or two before you have a desirable cloth structure.

If it runs for too long, we lose the thickness and the mesh will have moved too much in the Z direction.

Add a subdivision surface modifier below the cloth simulator to get a more accurate representation of the finished cloth.

In the timeline editor, select the play-head and drag it back and forth after a simulation has been playing to pick a specific frame you are happy with. The mesh should still be relatively flat.

Save your work, then apply the cloth simulator in the modifier stack, keep the subdivision surface modifier and delete the force fields before continuing to the soft body simulation.

Soft body simulation setup

At this stage, we have a good-looking cloth piece. Now we only need to deform it so it fits in the desired environment.

First, we will add a lattice object to the scene. Scale it so that our cloth fits nicely within it. Make sure that the floor of the lattice is at the floor of the cloth object. It is fine if some part of the cloth object is not inside, but below, the lattice.

Increase the resolution of the lattice to about 8 for the U and V coordinates and 3 for W. Then add a soft body simulation.

For the settings of the soft body simulation, start by disabling the “goal” section. Goals are used with animation to make the simulation stay in place, which is not what we want here.

Check the checkbox for the “self-collision” section and expand the “edge” section. At the bottom, also check the sub-section “stiffness”. This ensures that quad faces don’t collapse, and since our mesh is only quads, it is essential for keeping the volume intact.

Now start from the top of the edge section. The push and pull values determine how stiff an edge should be when it becomes longer or shorter during the simulation. A higher pull value will try to maintain the original length when an edge is stretching.


The push will do the same when an edge is shrinking. A higher value will give more weight to maintaining the original shape. In my settings, I kept the pull at 0.9 and the push at 0.95.
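A toy model of the push/pull behaviour: each edge acts like a spring whose stiffness depends on whether it is stretching or shrinking. This is only a sketch of the idea, not Blender's actual soft body solver:

```python
def edge_spring_force(length, rest_length, pull, push):
    """Toy version of soft body edge springs: 'pull' resists stretching
    (edge longer than rest), 'push' resists compression (edge shorter).
    Returns a restoring force towards the rest length."""
    stretch = length - rest_length
    stiffness = pull if stretch > 0 else push
    return -stiffness * stretch

# A stretched edge (1.2 vs rest 1.0) is pulled back with stiffness 0.9;
# a compressed edge (0.8) is pushed back out with stiffness 0.95.
print(edge_spring_force(1.2, 1.0, pull=0.9, push=0.95))
print(edge_spring_force(0.8, 1.0, pull=0.9, push=0.95))
```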

The damp setting will determine how much each edge acts as a spring. The higher the value, the more it will resist acting as a spring. I ended up putting this to the maximum value of 50. This will help the lattice maintain its thickness.

The plastic value determines how likely the object is to take on a new default shape while the simulation is running. I put this at a low value like 1 or 2 and it works fine. We don’t want the object to permanently adopt a new shape while sliding over our collision object.

The bending is an important setting. Without bending, the shape will collapse. The object needs to bend back towards its original shape while slowly adapting to the environment. I set this to a small value of 0.05.

Last, enable the collision edge setting. The lattice seems to behave differently than regular mesh objects in soft body simulation. This fixes some issues with collisions.

If you have not done so already, add a lattice modifier to the cloth object and set the lattice with the soft body simulation as the lattice object in the modifier.

Collision object

Now we will continue by building a low poly collision object that will represent the environment we will later put the object into. Here, we add a plane, subdivide it once and extrude one face upwards. This extrusion will represent the side of a bed or similar.

Position the lattice and the cloth object above the middle of the collision object. Keep in mind that the lattice will slide off the elevated part of the surface because of its weight, even with friction, so keep more of the lattice above the elevated part than you want to stay on it in the finished scene.

Set the object we created to a collision in the physics tab. The setting we will need to keep an eye on is friction in the “soft body and cloth” section. We will most likely want to increase this to decrease the sliding effect common in these kinds of simulations. My final friction value was 25.

creating low poly collision object

Blender cloth simulation test

Now it is time to test our simulation in Blender. In the timeline, hit play (shift+space). This will take longer than our cloth simulation, but within 100 frames we should have a handful of frames to choose from. If the simulation misbehaves, here are a few things to check.

  • Apply the scale to all objects before starting the simulation.
  • Check so that the lattice and cloth object share the same origin location.
  • Tweak the soft body parameters. If the object seems too light, play with the mass value in the object section of the soft body simulation. While testing, I have had this value anywhere between 0.5 and 4. Also, adjust the friction in the same section if you are experiencing a problem with sliding.

Continue tweaking the soft body simulator settings. Blender can sometimes just bug with the collisions, and you may have to recreate the collision object if you can’t get the lattice to collide properly.

It might seem like a lot can go wrong here, and sure it can, but after some testing, I rarely had problems that were not solved by recreating the collision object. That seems to be a bug I could not work around, or there might be some setting I accidentally change sometimes and overlooked.

When the soft body sim has run, we cannot apply the modifier to the lattice; it must stay in the stack. We can bake the simulation to prevent loss of data after it has run once, but this means we can’t directly manipulate the lattice manually after a run to make small adjustments.

What we can do instead is to add another lattice and deform our soft body lattice with a second one using a lattice modifier. Kind of latticeception if you will. We can also use a vertex group for more granular control.

If you want to close the gap between the floor and the cloth object, you will most likely need to do this lattice on a lattice wombo combo. The good news is that it is not as cumbersome as one might first expect.

Once you are happy with the shape of the cloth object, apply the lattice modifier on the cloth object to make it a standalone mesh not depending on other objects any more.

Creating the material

To create the material, we will use Cycles and rendered preview mode. Use ctrl+b to box select an area in the 3D viewport for rendering. Clear the border with ctrl+alt+b.

Set up some light in the scene, either by using an hdri or by adding any other light source of your choice. An hdri has the added advantage of an environment to reflect. Set one up by going to the shader editor, switching to the world mode and adding an environment texture. Browse for your hdri and connect the node to the background node’s colour input.

For free hdri images you can visit

With the node wrangler add-on enabled, hit ctrl+t after selecting the environment node to add texture coordinate and mapping nodes for basic controls.

Select your cloth object and add a new material in the shader editor or the material tab in the properties panel. Using node wrangler in the shader editor, select the principled shader and hit ctrl+shift+t to add a texture set. Browse for a fabric material. If you have none, you can get free fabric materials from or Make sure you get the colour, roughness, and normal maps.

If you did not create a UV map for your object, change the texture coordinate output to object and set all the image textures from flat to box projection. Set the blend value to 0.2 or similar. Just make sure that all maps have the same value.

Bring up the sheen value on your principled shader. This will give the material a more realistic cloth feel. Adjust it to your liking. You can also play with the sheen tint, which tints the sheen’s reflection towards the colour input of the shader.


Final thoughts

We used a lot of Blender's tools to create an object that represents thick cloth while maintaining detail. We went from modelling with the array modifier, to simulating cloth using forces and the cloth simulation, to bending the shape to fit an environment using a lattice deformed by a soft body simulation.

As the last step, we created a basic fabric shader.

Things we didn’t look at are ways to create seams or the high-frequency details that are sometimes needed to make cloth look exceptionally realistic. For those, sculpting is most likely needed. But for a background prop or an object that is slightly out of focus, this process can be just what you need. From here, you can also add sculpting to increase the detail and realism even more.

If you like this kind of content, please consider joining the newsletter so we can give you a heads-up when something new arrives.

I hope you enjoyed this look at the Blender cloth simulator.


Eevee lighting interiors

Eevee lighting interior


In this article, the goal is to outline the key points we need to consider when dealing with lighting and reflections in Eevee for interior scenes. In Cycles, we had the luxury of plug and play with an HDRI; often that is enough to get good base lighting for a scene. In Eevee, it is not so simple. We will try to demystify the relevant checkboxes and sliders that come with the benefit of a real-time render engine.

Since this is the first time in history we have a ray-traced engine like Cycles working with the same shader system as a rasterized real-time render engine like Eevee, we do not yet know exactly how workflows will evolve in the coming years. Most certainly, workflows will be faster for shading, but the lighting in the two engines is quite different, at least at this point.



Let’s list some main terms and tools that Eevee uses for lighting and reflections.

  • Light probes
    • Reflection cube map
    • Reflection plane
    • Irradiance volume
  • Screen space reflection
  • Ambient Occlusion
  • Shadow settings, both global and lamp settings
  • Indirect lighting settings and baking

These are the main terms and tools we will look at. We will not cover every setting for every area of interest. We will dive deeper into some areas and stay shallower in others. The main goal is to get a good starting point for interior lighting in Eevee and to combat some errors we may encounter.

The main problem we will encounter when working with interior scenes in Eevee is light bleed. This is when light leaks in at the edges of our interior even if there is no gap in the geometry. We may also run into reflection issues and other artifacts that we can solve with the settings and tools we will discuss. But not always; sometimes we need a little extra tweaking and intuition.

When lighting in Eevee, there are some restrictions to keep in mind that we did not have in Cycles. We cannot use emission shaders on geometric objects as lights; right now, they only show up in reflections. Instead, we use the good old trusty light objects. Another limitation is that we do not have access to node-based materials for lights as we do in Cycles. We are also best off avoiding HDRI maps, since they may create light bleed at the edges of our interior. In fact, most light that comes from outside our interior scene creates light bleed. But we will look at a way to deal with that.

General settings

Let’s talk about some settings. By default, all the post-processing effects in Eevee are turned off. In the properties panel, go to the render settings and turn on ambient occlusion and screen space reflections. Under screen space reflections, tick the refraction box if you are planning on using glass shaders, and untick “half res trace” if you have a mid to high-end graphics card and your scene is not too complex.

Still in the render settings, turn up the samples in the sampling section at the top. I usually set the viewport samples to 256 and the final render samples to 512. This is to clean up the soft shadows as much as possible. For rendering animations, I may turn the render samples down to 256 as well if I am in somewhat of a hurry.

Under the shadow settings, we will also make some changes to help us reduce light bleed later. Change the method from ESM to VSM, tick high bit depth and soft shadows. Set the cube size to 512 or higher for best results. 

The cube size setting applies to all kinds of light objects except sun lights, which use “cascade” as their shadow map type instead. When lighting an interior from the outside, however, a point light is less prone to light bleed than a sun lamp, so we will omit sun lights altogether and cascade shadow maps will not matter in this case.

Keep in mind that all these settings are heavy on memory. VSM uses twice the amount of memory as ESM. High bit depth also doubles the memory usage of shadow maps and soft shadows need more samples to get rid of noise which requires more computing power from your graphics card. If you have a problem with a slow viewport after changing these settings, consider changing them only when preparing the final render. For a middle ground, use only high bit depth and VSM for now. VSM may have artifact problems when “high bit depth” is inactive.

Eevee lighting the interior

Lighting from the inside is usually not a big issue and seldom leads to lighting artifacts, so if you can keep all lights within the room, that will probably be a painless experience. The problems start when you try to light a scene from the outside. When lighting from the outside with Blender's default Eevee settings, you will most likely see artifacts in the form of light slipping through at the edges of the room. That is what we call light bleed.

To combat this, we have a few changes to make. The first thing we should do is add thickness to the walls of our room. If your room is set up as a simple cube, you can add a solidify modifier, adjust the thickness, and watch the light bleed have less and less of an effect.

For best results, make sure that the only light outside the room comes from point lights, area lights, and possibly spotlights. Use a point light instead of a sun, and make sure that your new point “sun” has these shadow settings:

  • Turn on shadows if it’s not already on
  • Set clip start to 0.1
  • Softness to 0.0
  • Bias to 0.001
  • Exponent can be left at 2.5
  • Bleed Bias set to 0.1
  • Turn on contact shadows
  • In the contact shadows set softness to 2.0

With these settings on a point light instead of a sun together with the general settings we did earlier we should be able to handle most artifacts as long as our walls have some thickness. 

To add some skylight to this, go to your world settings and instead of adding an HDRI to light the scene, stick with a background color to fill the scene with ambient light. For instance, tint the color towards a light blue and set the strength to somewhere around 4-10 to simulate some skylight. 

If you want to light without the directional light from a point light, acting as the sun, you can also light with area lights right outside the windows of your scene. Keep in mind it is important not to put the lights inside the wall or that can also result in light bleed.

Indirect lighting using irradiance volumes

So far, we have the direct light in our scene, but what about indirect light? In Cycles, indirect light is calculated as we render. In Eevee, indirect light is calculated beforehand. To get indirect light, we use an irradiance volume. An irradiance volume is a grid of points that capture indirect light during a baking process. When the bake is done, the irradiance volume works as a light itself and lights the scene with the indirect light captured during the bake. At least in theory.

Each surface will use the closest capture point of the irradiance volume for its indirect light. This means that if we have capture points outside of our interior, or inside the walls or other objects in our interior, we will capture light either from the outside or from inside one of our objects. In those cases, we will get different artifacts depending on our light setup, because if a capture point of our irradiance volume sits just outside a wall, it will cast indirect light coming from the outside onto the inside of that wall.

It is therefore essential that all our capture points are inside our interior, capturing the indirect light we want to capture. As long as they are, it might help to think of them as lights.


To add an irradiance volume, hit shift+a and go to light probe; there you will find the irradiance volume. You can move, rotate, and scale the volume just like any other object. Position it so it fits within your interior. If your interior has an L-shape or another shape that a single irradiance volume can’t occupy, add as many volumes as you need and put them into place. The dots capturing the light should not overlap; place the volumes so that together they form a continuous, even grid for the entire area where you want to capture light.

To change the number of dots along an axis of the irradiance volume, go into its settings and change the resolution. Fewer dots will cause fewer problems and more even light. I usually go with about 1 dot per 1.5-2 meters of space, sometimes fewer. Now, let’s bake the indirect light. Go to the render settings and find the indirect lighting section. Hit “bake indirect lighting”. This will also bake any reflection probes. More on those later.
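The rule of thumb above can be turned into a tiny helper. The spacing of 1.75 meters and the example room dimensions are assumptions derived from the "1 dot per 1.5-2 meters" guideline, not anything Blender computes for you:

```python
def irradiance_resolution(size, spacing=1.75):
    """Rough rule of thumb: about one capture point per 1.5-2 m of room
    along each axis, with a minimum of 1 (numbers are assumptions)."""
    return max(1, round(size / spacing))

# Hypothetical room: 7 m long, 3.5 m wide, 2.8 m tall.
room = (7.0, 3.5, 2.8)
resolution = tuple(irradiance_resolution(s) for s in room)
print(resolution)  # → (4, 2, 2)
```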

When baking is complete, you can preview what each point has captured in the general render settings. Go to the indirect lighting section and find the display subsection. Tick the eye icon next to the slider for irradiance size and increase the size of the preview. Just right above you can preview reflection probes.

If you experience problems after the bake, one or more of your dots may be inside a piece of furniture or another object in your scene. If this is the case, put those objects in a separate collection and disable them for rendering in the outliner. Bake again and bring the objects back after the bake has finished. The objects will still receive indirect light, but they will not cast any indirect light themselves. This workaround will most likely be good enough for most scenarios if you can’t adjust the position of your irradiance volume probes.

Sometimes several probes for an area can create light bleed. Therefore, if you have light bleed with no clear reason, try to decrease or increase the resolution of the irradiance volume one or two steps in either direction and bake again to see if you get rid of the light bleed.

Windows and light

For windows and glass materials, if we don’t need the reflections on the glass, removing it completely is a reasonable way to go. If we want the reflections and roughness on the window, however, this is a usable node setup for the glass.

It is not physically correct, but in Eevee, what is? It gives you some entry points to work with the roughness and reflections of the window. To change how much or how little reflection you want, adjust the curve; to change the roughness, plug any texture into the roughness of the glossy node or use the slider for a uniform roughness. When using a window, make sure to also turn it off while baking the light and cube maps. Also, make sure that the geometry for the window has some thickness to it and uses flat shading.


Reflections

We have covered both direct and indirect lighting for interiors. Now it is time to think about reflections. We have a few options to work with. The first one is screen space reflection. It will reflect anything visible on our screen; if it is not visible, it will not reflect. For things that are not within our view, we need a light probe. The light probes concerning reflections are the reflection cube map and the reflection plane. We can set the reflection cube map to either a sphere or a box. Most interiors are square-shaped rooms, so we use the box alternative for most of them. For outdoor scenarios, a sphere is more likely to work.

Screen space reflection does not need to be baked. It is our primary means of getting reflections in Eevee. Just like any of the reflection probes (cube/sphere or plane), it works for any material with a reflective property, and it reflects anything we can see directly on the screen. Probes are the secondary means of reflection and complement screen space reflections, but they need to be baked.

In the render settings, we can bake reflections independently from indirect light. This is useful if we need to remove or hide objects in our scene to bake the indirect light without issues and then bring the objects back for baking the reflections. 

Screen space reflection may be enough for some of our scenes, but when objects that should appear in reflections are not on the screen, perhaps behind the camera, around a corner, or hidden behind another object in the scene, the reflection probes come in handy. They will make a mirror from their location and use that mirror as the reflection for any object within their range. It is not 100% accurate, but it gives us a close approximation.

A reflection cubemap has a few properties. What we need to keep in mind is the distance. The distance determines which objects will be influenced by, and therefore reflect, the data that the probe collects. Then we need to keep track of the clipping start and end. This is where the probe will start and end its collection of the surroundings to reflect. The start can be important if we place the probe inside another object. In those cases, the clipping start can be adjusted to just outside that object. When dealing with interiors, the clipping should end beyond any walls so the probe reflects everything within the interior.

We will mostly use reflection planes for mirrors or highly reflective flat surfaces. In look dev viewport mode or rendered viewport mode, move the plane closer and closer to the reflective surface until it looks correct. At that point, the plane will reflect as intended. The distance value will determine how far away from the reflection plane a reflective surface could be to be affected. Normally you use one plane for every highly reflective surface you have. Also, scale the reflection plane slightly larger than the surface, rather than slightly too small.


We have taken a technical look at settings in Eevee that are important for lighting and reflections in interior scenes. These are some key takeaways to keep in mind.

  • Use ambient occlusion and screen space reflection.
  • Use VSM rather than ESM for interiors.
  • High bit depth and soft shadows together with 256 or 512 samples.

Lighting from the outside is prone to light bleed. These settings on a point lamp together with a light blue ambient world color are a good starting point for daylight lighting from the outside.

  • Turn on shadows if it’s not already on
  • Set clip start to 0.1
  • Softness to 0.0
  • Bias to 0.001
  • Exponent can be left at 2.5
  • Bleed Bias set to 0.1
  • Turn on contact shadows
  • In the contact shadows settings, set softness to 2.0

Some more general takeaways:

  • An irradiance volume stores pre-baked indirect light.
  • It is important that the individual sample points of the irradiance volume are inside our interior scene.
  • Use a transparent shader instead of a glass shader for windows.
  • Screen space reflection is a good start for reflections, but reflection probes will help fill in the spots we can’t see directly from the camera.

I hope you learned something new about Eevee lighting.

Much of the information in this article comes from this thread on


Physically based rendering, blender nodes, with brick texture example


In the last post, physically based rendering and Blender materials, we looked at how the principled shader really works. We lay the foundation for our future material creation in Cycles and for Eevee when Blender 2.8 finally arrives. Here, we will take a close look at how this is all implemented in the node editor using image textures to power the principled shader. This setup will then be supported by various other nodes to give us a system to work with when layering different materials on top of each other.
If you missed the previous post, here are the summary and key takeaway points for using the principled shader in Blender.

  • You should use non-color data for all your textures except base color, for both metals and nonmetals.
  • For metals, keep your color values in the lightest 30% of the sRGB color space in the base color map.
  • For dielectrics, keep the color values above the 10% darkest and below the 5% lightest for the base color map.
  • In most cases, the metallic input is either 1 for metals or 0 for dielectrics. Seldom much in between.
  • When the metallic input is 1, the specular has no effect. The specular is instead calculated from the base color.
  • Roughness is the most artistic map; use it to tell the story of your object.
  • The normal map is angle data for outgoing light rays, not height information.

For any material that we can power with a set of image textures that are prepared for the metallic workflow, the system that we will discuss here will work very well and be very efficient.

If you want to follow along you can read the next section or skip to the “Brick texture and concrete combination” section to get right into the good stuff.

Setup the brick texture example

For demonstration and testing we will be using a set of image textures that is provided here:

They are provided under the CC0 license, originally from The HDRI is also licensed under CC0 and collected from

For this workflow guide, we will add a sphere and UV unwrap it with the sphere projection option, while viewing the sphere in orthographic view from the front by hitting number pad “5” followed by “1”. Also, add a material slot and name it.

showing the uv map preparation

Position the camera and set the resolution to a square like 1024 by 1024 or maybe 1920 by 1920. Move the resolution up from the default 50% to 100%.

Next, we will enable the node wrangler addon by going to user preferences. “ctrl+alt+u” for the keyboard-oriented Blender artist. Go to the addon section and start typing “wrangler” to filter the list in real-time. Check the box next to the node wrangler addon, hit “save user settings” and we are set so far.

Last preparation will be to add the hdri image. Go to the node editor, select the world material via the earth icon, and add an environment texture node. Browse for the image and add it. If not already selected, select the environment texture node and hit “ctrl+t”. This will add a texture coordinate node and a mapping node; this is a function of the node wrangler addon. Using the z rotation in the mapping node, we can now rotate our hdri.

Enough with the boring setup stuff, let’s get on with the show!

Brick texture and concrete combination

Our example will be a brick texture where we want to introduce patches of concrete where the bricks have fallen off. We will also add some dirt. Each layer will be added in a slightly different way because of their role in the full material. What we need to remember here though is that this is not a guide to create a brick material. We are here to learn a highly customizable and flexible system for creating materials that we can use repeatedly for most materials.

The image below shows the basic setup for a dielectric PBR material with the standard maps.

  • Color
  • Roughness
  • Normal

We can add this setup quickly by selecting the principled shader, hitting “ctrl+shift+t” and selecting all the maps that we need for the material. If the maps are named properly, the node wrangler addon will set up the rest for us like this. We don’t have to worry about which image textures should be set to color or non-color data, the normal map will get its corresponding normal map node, etc. If we have other maps, like a metallic or specular map, those will also be added correctly. A displacement map, however, will be added to the displacement input of the material output node. We can skip the displacement or combine it with our normal map through a bump node like this.

Our brick material is done. The first combination will be with a concrete material. Start by duplicating the principled shader and use “ctrl+shift+t” again, this time selecting the color, roughness and normal maps for the concrete material. The new concrete will be added above the bricks in this example.

At this stage, we will combine the two materials with a mix shader and get a very ugly blend between the two. Instead of evenly blending the materials, we want to tell Blender which material goes where, and for this we will use a mask. A mask is just a black and white texture. We can use any image, procedural texture or combination to create the mask, but we will go with the simplest possible option and use the procedural noise texture node as the mask. Add it, select it and hit “ctrl+t” to automatically add the mapping and texture coordinate nodes. Then connect the noise texture to a color ramp node before connecting the color ramp to the fac input of the mix shader. This is what it looks like.

To tune in the mask, it is easier for us if we set up Blender with a layout like the image below. We have a rendered view with the render border active to minimize the screen area we need to render. Create it with “ctrl+b” and clear it with “ctrl+alt+b”. To the right, we have the nodes we need to work with available as well.

Use “ctrl+shift+mouse click” on the color ramp to create a temporary view of what the node outputs. To reset the view back to our material “ctrl+shift+mouse click” on the mix shader which is the last shader node in our material chain before connecting to the material output node.

Bring the two flags of the color ramp close to each other to create a high contrast map. Also, change the detail value of the noise texture to 16 to create a more natural border between black and white. Now “ctrl+shift+mouse click” the mix shader to preview the mix we created. From here only tweaks remain until you are happy with the result. For me, I went ahead and inverted my color ramp and set the scale of the noise texture to 3.
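The noise-plus-color-ramp masking used here is easy to reason about numerically: a color ramp with two flags close together acts as a linear threshold on the noise value. A minimal Python sketch of that behavior (the function names are mine, not Blender's):

```python
def color_ramp(value, black_pos, white_pos):
    """Two-flag linear color ramp: 0 below black_pos, 1 above white_pos,
    with a linear blend in between, mirroring the node's behavior."""
    if white_pos == black_pos:
        return 0.0 if value < black_pos else 1.0
    t = (value - black_pos) / (white_pos - black_pos)
    return max(0.0, min(1.0, t))

# Flags close together produce a high-contrast mask:
print(color_ramp(0.40, 0.45, 0.55))  # 0.0 -> first material only
print(color_ramp(0.50, 0.45, 0.55))  # ~0.5 -> narrow blended border
print(color_ramp(0.60, 0.45, 0.55))  # 1.0 -> second material only
```

The closer the flags sit, the harder the edge between the two materials; the noise detail value then controls how jagged that edge looks.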

This combination of materials was easy enough. It is the basics for combining any PBR materials in Blender. From here we can take any principled shader and all the nodes connecting into it and group them and throw the group into any other cycles material for combination with other materials. All we need is a set of three maps for each material and a mask to tell what material goes where.

Leaking effect

Next, we will look at how we can combine this with a material that needs a very specific placement on top of our other materials. We will add some leaking effects that should start from the top and fade out as it comes further down our sphere. We will require a new specific UV map for this effect as well as a mask that masks out the exact area for the effect. In this case, we are lucky enough that a mask image is provided so we will use that. However, this is not always the case and sometimes you will have to create your own or tweak an alpha channel and use that as a mask. We will look at how to do this as well.

Let’s start with adding a new principled shader and import the leaking set of textures. Then add a mix shader between our existing mix shader and the output node. Connect the leaking principled shader at the empty slot and we should get this.

Now delete the texture coordinate and mapping node for this newly added material and add the UV map node. With this, we can specify a new UV map. Though we will need to create it first.

If you are not very familiar with UV maps, just follow along. It won’t be a very hard process. Remember, this is a system that should be easy, right? It may seem daunting right now but trust me, it is the same operations over and over with slight tweaks and adjustments. You already know the basics.

Creating the leaking UV map

Start by bringing in a uv/image editor and in the properties panel, go to the object data tab that looks like a triangle and click plus in the UV maps section. This will add a copy of our original UV map. I chose to rename mine to “leaking”. Fill in the name in the UV map node that we added previously.

If we select the new UV map we can alter it. We can scale down the parts of the mesh that should not have any leaking effect and hide them in a black area of the image, while the parts that should have the leaking effect are now adjustable with pixel precision inside the uv/image editor.

My leaking UV map looks like this for now, since I want the effect on most of my sphere and the top and bottom will not be visible in the image.

And now the mask

Back in the node editor, we will add some nodes for our mask image that we happened to have in this case.

You can see that we have the same concept as before, only this time we have an image mask instead of a noise texture and therefore also UV coordinates to power it. The color ramp is adjusted based on the image’s values. The important thing to remember here is to use “ctrl+shift+mouse click” to preview the color ramp’s output and adjust it accordingly. In this case, I did not want the mask to go from complete black to complete white, so I darkened the white a bit so that the underlying material would come through slightly.

If we needed to use the image’s alpha channel as the mask instead, the node editor would look like this.

We can also just use the image color itself and collapse the black and white range to generate a mask. It could look like this.

Note the shift from using the alpha output from the image texture to the color output since we may not have an alpha channel to work with.

At this point, we have arrived at additional tweaking. Usually, I tweak each material to look the way I want it right after adding it and before adding in the next one. However, I figured that it would be easier to follow if we left it for the end.

Individual material tweaks

We are going to look at a couple of ways to add flavor to our material before we go to the summary. The most noteworthy tweaks are introducing some color variation to the individual materials and creating a more distinct border between our bricks and concrete. We will also tweak the roughness.

Let’s start by adding some color variation to the bricks. Zoom in to the part of the material where the bricks live and look at the color map. To introduce some variation, we will have to first create the variation and then mask where the variation should be applied and where the original color should live. We can do this in a pretty similar way to how we have done it with the mix shader earlier but this time with a mixrgb node. The mixrgb node will serve as the mixer. Though we still need something to mix and a way to mix. Add a hue/saturation/value node to generate a slight variation to our texture. I will set mine to these values.

  • Hue 0.48
  • Saturation 1.2
  • Value 0.8

Now we have a fac input available in the mixrgb node, where we will use the same trick as before. Combine a noise texture, with the detail set to 16, with its corresponding texture coordinate and mapping nodes using “ctrl+t” while the noise texture is selected. Then add a color ramp between the noise texture and the mixrgb node. Collapse the color ramp to get the black and white mask that we want.
This is how I ended up setting up the color variation for a very slight difference in color across the surface.

You have quite a lot of parameters at your disposal to get the noise the way you want. You can adjust rotation, location and scale for any single axis in the mapping node to get a stretched or just different effect. You can also try using object or camera coordinates to generate different noises. You can also try to add more flags to the color ramp and play with those values to have complete control over the transition between light and dark.

Creating a more distinct transition between bricks and concrete.

For this part, we will drive our effect from the mask separating the bricks from the concrete and then feed it through a bump node that we combine with the existing normal map data in the concrete material. Look at the edge between the bricks and the concrete in this image to see what effect we are after.

If you have ever used a 2D image manipulation program like Photoshop, Gimp or Affinity Photo, you know that you can select part of the image and have the marching ants show the way, right? We will do much the same here, but we will mathematically tell Blender which parts we want to select, again using masks. Right after the mask dictating the brick vs concrete distribution, add an invert node to invert the mask and then feed it through a new color ramp. These nodes should not have anything connected to their outputs right now. Instead, hit “ctrl+shift+mouse click” to see what the color ramp output looks like.

Move the black flag of the new color ramp towards a position of 0.6 or 0.7.

Add a new mixrgb node and set the blend mode to “linear light”. Connect the new color ramp to the top socket and the first color ramp to the second socket. Then add another color ramp after the linear light mix node. It will look like this.

Bring the white flag of the new color ramp to about position 0.2 to collapse the gray tone that the linear light left behind.
Now add a bump node in the concrete material between the normal map node and the principled shader. Then connect the last color ramp in the chain to the height input of the bump node and the effect is done.
We now have something like this.
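The reason the final color ramp is needed comes down to the math of the blend mode. A sketch, assuming the common Photoshop-style definition of linear light (result = base + 2·blend − 1, clamped to the 0..1 range; the function name and clamping are my own simplification):

```python
def linear_light(base, blend, fac=1.0):
    """Linear light blend as commonly defined: base + fac * (2*blend - 1),
    clamped to 0..1. A blend value of 0.5 leaves the base untouched."""
    out = base + fac * (2.0 * blend - 1.0)
    return max(0.0, min(1.0, out))

# Mid-gray in the blend layer changes nothing, which is why leftover grays
# remain after the mix and a final color ramp must collapse them:
print(linear_light(0.3, 0.5))  # 0.3 -> unchanged
print(linear_light(0.3, 0.9))  # pushed toward white
print(linear_light(0.3, 0.1))  # pushed toward black
```

Values near 0.5 in the edge mask pass through almost unchanged, so pulling the white flag of the last color ramp down to around 0.2 crushes those grays into a clean black and white selection.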

A material is never finished. We could continue to add color ramps between the roughness maps and their corresponding principled shader or we could add variation to the leaking texture color for example. There are many possibilities.

Summary of the brick texture example and the system

The big takeaway from this article is not the bricks or the concrete but the flow of nodes. This way we can easily take a material or a mask, create a group out of it, and have a very easy and flexible system for layering different materials on top of each other. We can also present the material much more clearly. Take a look at this for example. Here I have created groups by selecting nodes and using “ctrl+g” to group them, and renamed the groups in the “n” panel. Creating materials this way gives you a very good way to reuse groups of nodes.

This is a very solid foundation for building your materials and it is also compatible with the upcoming Blender 2.8 version and its real-time engine Eevee.

Blender selection tools short tutorial#3

In Blender there is a variety of ways to select different parts of a mesh. However, it’s not always clear how we can combine these to make a more advanced selection. In this tutorial we will start by looking at some of the basic stuff. Then we will dive just a slight bit further and learn how we can combine some of these tools to make quite advanced selections in a short amount of time instead of manually selecting with just a single basic tool at a time.

We will be talking about how we can select multiple edge rings and edge loops, or for instance every second edge loop on a cylinder. Also, some tips about what to avoid or be careful about when selecting in a path.

I think these tools and techniques can help you even if you know the basics already. Some tools are just many times better in combination with each other rather than as separate tools. Hope you will find some new knowledge.

Thanks for taking the time to go through this tutorial, and I hope you learned something. You can also check out other tutorials or contact me through the contact section on the site, as well as social media.

Physically based rendering and Blender materials

Physically based rendering and blender materials

Physically based rendering in Blender has been guesswork for some time. With 2.79, however, comes the principled shader. It will help you create accurate Blender materials for Cycles. However, there still seems to be some confusion about how it works. Let’s get a closer look at it and nail physically based rendering once and for all.

Physically based rendering, or PBR for short, is a way for ray traced render engines such as Cycles to accurately describe a material. It lets the artist focus on the artistic side to a larger extent and leaves the more technical issues to the engine. This information is a set of guidelines and is not written in stone. It is here to give you an idea of how PBR works with the principled shader. Once you understand it you can break and bend the rules to your will, but a good foundation to start from is better than diving in without knowing how the shader reacts to different settings or texture map inputs.

Two different workflows

PBR can be divided into two different workflows. Within the realm of Physically based rendering you use one or the other. They both give the same result in the end. The difference is the texture maps that are used to describe the final material. These are the two workflows.

  • Metallic/Roughness
  • Specular/Glossiness

The new principled shader is geared towards the metallic/roughness workflow and that will be our focus here. It is the most common workflow, but you should be aware that another one exists as well. However, specular/glossiness is not natively supported in Blender by any single shader.

Physically based rendering texture maps

When dealing with the Metallic/Roughness workflow we have three specific texture maps to work with.

  • Base color(Diffuse, Albedo)
  • Metallic
  • Roughness(or inverted glossiness map)

We also have a few that are also shared with the specular/glossiness workflow. They are the following.

  • Normal
  • Height/Displacement
  • Ambient occlusion(AO)

For most materials, we are concerned with the first set of maps and the normal map. The height and AO maps are optional. With these four maps we will be able to create a lot of the materials that we see around us every day. Before we talk more about maps however, let us first take a brief look at fresnel followed by color spaces.

Fresnel and specular

What is fresnel? First of all, the pronunciation is with a silent s. Second, it determines the falloff of specular reflection from a viewing angle of 0 degrees up to the edge of an object. Fresnel at a 0 degree angle is also referred to as F0. This is a property that all materials have.

The clearest example of this is when you are standing at the edge of a calm lake: looking straight down you can see through the water clearly, but when you look into the distance the water becomes more and more like a mirror. This is fresnel, and when viewing at a 0 degree angle the specular varies heavily depending on whether you are looking at a metal or a non-metal material.

If we are looking at a metal straight on, the F0 is between 70% and 100% depending on which metal we are looking at. For non-metals, this value is between 0% and 8% in most cases. At the edge of the object the specular is almost always at or near 100%. In the case of a sphere this becomes clear: the edges become more and more specular as the surface turns away from us.

Principled shader used with a dark red #531D21 base color with a roughness of 0.2 lit by an hdr from
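The falloff from F0 at a head-on angle toward full reflectance at grazing angles is usually approximated with Schlick's formula. A small sketch of that standard approximation (not Blender's exact implementation):

```python
def schlick_fresnel(f0, cos_theta):
    """Schlick's approximation of fresnel reflectance.
    f0: reflectance looking straight at the surface (cos_theta = 1).
    cos_theta: cosine of the angle between view direction and surface normal."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Head-on: reflectance equals F0
print(schlick_fresnel(0.04, 1.0))            # 0.04 -> typical dielectric (4%)
print(schlick_fresnel(0.95, 1.0))            # 0.95 -> typical metal
# Grazing angle: both approach a perfect mirror
print(round(schlick_fresnel(0.04, 0.0), 2))  # 1.0
```

This is exactly the lake example in numbers: at a grazing angle even a 4% dielectric reflects nearly everything.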

Linear color space vs sRGB

Let us deal with the color space stuff now. The computer works with image data in linear space. This means that there is an equal amount of color change between every shade from black to white. This is how the computer reads data, but the human eye does not perceive color this way. So, in order to save space and not spend data on shades humans can barely distinguish, we encode the data to give more room to color information in the ranges that we can actually see. This encoding is also called gamma correction.

So how do we see color? Well, what you need to know is that the monitor outputs color in SRGB color space. SRGB color space is adjusted so that we make use of more color space in the ranges that we can actually see. This means that before the image is sent to your screen the computer encodes the linear image to SRGB so that we can see a richer image.

Top gradient shows the linear color space as humans see it. This is then encoded to sRGB for a smoother gradient.

In Blender, when we add an image texture to our Cycles material, we can decide if the image should be treated as sRGB and gamma corrected before the node sends the image on for the next node to interpret. This is the default behaviour. However, if we change the image dropdown menu from color to non-color data, we tell the node not to gamma correct the image, just to pass it to the next node as it is, in linear space. This is useful because all the maps that are not base color should be non-color data. They are not there for our eyes to see, but for the shader to know which parts of the material do what. They are there for the computer to read, and the computer reads color in linear space.

The general rule becomes: Set all your texture maps to non-color data except the base color.

If you want to approximate the conversion yourself in Blender you can use the gamma node with a value of 2.2 (or 1/2.2, depending on which direction you are converting). It is not exactly the same as the sRGB encoding, but it is so close that you probably can’t see a difference.
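A few lines of Python show how close the plain gamma curve comes to the real thing. The piecewise formula below is the standard sRGB definition; the comparison at mid-gray is the interesting part:

```python
def linear_to_srgb(c):
    """Exact sRGB encoding: linear near black, a 2.4 power curve elsewhere."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

def gamma_encode(c, gamma=2.2):
    """Plain gamma encoding, the approximation a gamma curve gives you."""
    return c ** (1.0 / gamma)

# Mid-gray: the two encodings land within about half a percent of each other
print(round(linear_to_srgb(0.5), 3))  # 0.735
print(round(gamma_encode(0.5), 3))    # 0.73
```

Both push a linear 0.5 up to roughly 0.73, which is why a mid-gray pixel on your monitor looks perceptually halfway between black and white.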

Metals and the principled shader

Now when we know a bit about color spaces and fresnel we will continue on with the inputs of the principled shader.

If we are dealing with a fully metallic material we set the metallic slider to 1 and no texture map is needed. The same goes for materials that are non-metals: set the slider to 0 and you are good to go. For any material that requires a combination you will need a texture map to tell Blender which areas of the material are metal and which are not. This map should be grayscale and set to non-color data.

You can also set this to any value in between 0 and 1 but that will create a material that does not exist. Sometimes this can be useful though. Imagine that you are creating a metallic surface that has a dust layer on it, then this value can be tweaked to something in between to simulate the dust. It works but it is not accurate. You can also use gray values to blend between a metallic and dielectric material where their edges meet.

The metallic input also dictates how other parameters of the principled shader behave. When the input is set to 1 the specular slider has no effect. The same goes for specular tint; changes to these inputs make no difference. Instead, the specular data comes from the base color map. I will repeat that. Specular data for a metal using the metallic/roughness workflow in a physically based rendering scenario comes from the base color map.

The base color map is still set to color data even when it is used to determine the specularity of metals. It also still determines the color, or specular tint if you will. Metals have no diffuse aspect, so it makes sense to switch the behaviour of the base color input for metals. After all, it contains three times as much data as a grayscale map.

Most real world metals have a reflectance value between 70% and 100% at F0. For us that means that any pixel in our base color map should have a value of around 0.7 or higher. If we have a color texture map as input, it means that the pixels should be in the brightest 30% of the sRGB color space.

If we input a color ourselves, using the color wheel widget to set a solid color, we will have to look at the hex values. The rgb and hsv sets of sliders are in linear space, so the values won’t read correctly, but the output will be. This is not very convenient, however, so I usually stick to the hsv sliders and, for metals, I don’t let the value slider go below 0.7. The accuracy of this? Well, good enough for me.

If you want real consistency however you should look up the correct colors for any given metal that you want to recreate.

Dielectrics and the principled shader

Now let’s move the metallic slider to 0. We are now in the realm of dielectrics or non-metal materials and the specular slider is in full effect!

However, we are in the metallic/roughness workflow, and the specular input here works very differently from the specular/gloss workflow and how we used to work. The input slider goes from 0 to 1, but you can set it to values higher than 1 by typing in the value. This 0 to 1 range is mapped to 0% to 8% specular. At the default setting of 0.5 we therefore have 4% specular. This is the most common range of specular for dielectric materials, and in most cases the default 0.5 value does not need to change. Not very exciting. The slider is more of an artistic tool that you can tweak to squeeze some extra “oh yeah!” out of your material. It is not meant to create a 100% specular metallic; that is what the metallic slider is for, and when working with metals the specular is in the base color, as we have learned.
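The slider mapping above is simple arithmetic, and it also connects to IOR through the standard F0 formula from optics. A quick sketch of both relationships (the helper names are mine; the 8% scale is the mapping described above):

```python
def specular_to_f0(specular):
    """The principled 'Specular' slider maps 0..1 linearly onto 0%..8% F0."""
    return 0.08 * specular

def ior_to_specular(ior):
    """F0 from the index of refraction, ((ior-1)/(ior+1))^2,
    rescaled to the 0..1 specular slider range."""
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    return f0 / 0.08

print(specular_to_f0(0.5))              # 0.04 -> the default 4% reflectance
print(round(ior_to_specular(1.45), 2))  # 0.42 -> slider value for a glass-like IOR
```

So the default slider value of 0.5 corresponds to an IOR of roughly 1.45 to 1.5, which covers most common dielectrics, and that is why it so rarely needs changing.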

Over to the base color. Now this value has nothing to do with specular for non-metals and is pure reflected color. Since we don’t want this to contain light or shadow information we should not let the image contain pure black or pure white. Try to leave the darkest pixels about 10% lighter than black and the lightest pixels about 5% darker than white in srgb color space for your base color map.

So, this leaves us with the roughness and normal maps.

Normal map and roughness

Now we will leave the realm of technical terms and head into the artistic mist of “it depends”, “taste” and other interesting stuff that can’t be defined. Don’t be fooled though, we are still talking about physically based rendering.

We start with the roughness map. This is the most artistic map and can be used to tell the story of your object. You can add scratches, dust, fingerprints or water vapor, just to name a few. There are no real rules here other than experimenting and making the best combination of properties that will tell the tale of what the object has been through. It is a grayscale map where black, or 0, means no roughness and white, or 1, means full roughness.

The normal map is often mistaken as containing height information, but the normal map actually contains angle data. It determines the direction in which an incoming light ray will bounce off. The result is, however, similar to height information in that it simulates geometry changes. It is another artistic map that helps create more detail in an object and is far more efficient than using real geometry. Pipe it through a normal map node and into the normal input of the principled shader to use it. It can also be combined with height information from a height map using the bump node.
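Since the map stores directions rather than heights, each pixel's RGB is just a vector squeezed into the 0..1 range. Decoding it is a one-liner, which also explains the characteristic pale blue of a flat normal map:

```python
def decode_normal(r, g, b):
    """Tangent-space normal map decode: 0..1 color channels -> -1..1 vector."""
    return (2.0 * r - 1.0, 2.0 * g - 1.0, 2.0 * b - 1.0)

# The typical pale blue (0.5, 0.5, 1.0) decodes to "straight out of the surface":
print(decode_normal(0.5, 0.5, 1.0))  # (0.0, 0.0, 1.0)
```

Any pixel that deviates from that blue tilts the shading normal, and the shader bounces light as if the surface itself were tilted there.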

The other inputs

That is a mouthful of physically based rendering and Blender materials with the new shader. Let’s take a quick look at the rest of the inputs before we move over to the summary. These are specific to special types of materials like skin, car paint, fabric or glass.

We start with the sheen and sheen tint. Sheen tint only has an effect if sheen is not 0. It is intended to help simulate cloth, adding a soft white reflection around the edges. The sheen tint mixes the base color into that reflection.

The anisotropic input is used to stretch the reflection of an object. Think of a brushed metal with a circular pattern, such as the underside of a frying pan. Instead of having a normal map or geometry to simulate the circular pattern that gives it the stretch, you can turn this input up to simulate it. The anisotropic rotation dictates the reflection’s rotation, and at the bottom of the shader you have a tangent input that can also be used to affect the rotation in a more precise manner by inputting vector data. Both the tangent and anisotropic rotation have no effect if anisotropic is at value 0.

Next we have the clear coat, clear coat roughness and clear coat normal inputs. These are there to add an extra layer of specular on top of the material. Think of car paint with its deep reflections. The clear coat roughness gives this layer its own roughness, and the same goes for the normal. In a lot of cases you will input the same normal map into the clear coat normal as into your ordinary normal input, but in rare cases you may want different normal maps for the two layers. Same thing here: the clear coat roughness and clear coat normal have no effect if the clear coat input is set to 0.

The IOR input only has an effect when used together with the transmission input. Transmission allows you to create glass and ice materials with the principled shader, and the IOR dictates how much light rays bend as they pass through the object.
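
To make the role of IOR concrete, here is a small sketch (my own illustration, not part of the article) computing the refraction angle with Snell's law, which is the physical relationship the IOR value controls. The function name is made up for this example.

```python
import math

def refraction_angle(incidence_deg, ior):
    """Angle of the refracted ray inside a medium, via Snell's law:
    sin(theta_t) = sin(theta_i) / ior, for a ray entering from air
    (air has an IOR of roughly 1.0)."""
    s = math.sin(math.radians(incidence_deg)) / ior
    if abs(s) > 1.0:
        return None  # total internal reflection, no refracted ray
    return math.degrees(math.asin(s))

# Glass sits around IOR 1.45: a ray hitting at 45 degrees bends down
# to roughly 29 degrees inside the material.
print(round(refraction_angle(45.0, 1.45), 1))
```

A higher IOR means a stronger bend, which is why water (about 1.33), glass (about 1.45) and diamond (about 2.4) refract so differently.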

Lastly, you have the subsurface scattering (SSS) inputs that help create, you guessed it, subsurface scattering. They use a different method for calculating subsurface scattering than the older SSS shader, so the results will differ slightly. But they are here to make sure we can use this one shader to create as many materials as possible without combining different shaders. You also guessed right when you assumed that the subsurface radius and subsurface color have no effect if the subsurface input is set to 0.


There are of course other areas to consider as well, such as lighting and post-processing. You should also use the filmic color management in Blender to make sure you have a wider dynamic range available for more realistic renders. If you find any errors, please contact me so that I can correct them. Physically based rendering is important not only for realism but for consistency as well.

Anyway, what are the important values to take away from this?

  • Use non-color data for all your textures except base color, for both metals and nonmetals.
  • For metals, keep the base color values within the lightest 30% of the sRGB color space.
  • For dielectrics, keep the base color values above the darkest 10% and below the lightest 5%.
  • In most cases the metallic input is either 1 for metals or 0 for dielectrics, seldom anything in between.
  • When the metallic input is 1, the specular input has no effect; the specular is instead calculated from the base color.
  • Roughness is the most artistic map; use it to tell the story of your object.
  • The normal map holds angle data for outgoing light rays, not height information.

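The base-color rules above can be expressed as simple numeric checks. A hypothetical helper (my own sketch): the thresholds are the article's 30%/10%/5% guidelines applied per channel on a 0-255 sRGB scale.

```python
def base_color_in_range(rgb, metallic):
    """Check an sRGB base color (0-255 per channel) against the PBR
    guidelines above: metals stay in the lightest 30% of the range;
    dielectrics stay above the darkest 10% and below the lightest 5%."""
    if metallic:
        lo, hi = 0.70 * 255, 255          # lightest 30% only
    else:
        lo, hi = 0.10 * 255, 0.95 * 255   # skip darkest 10% and lightest 5%
    return all(lo <= c <= hi for c in rgb)

print(base_color_in_range((240, 230, 220), metallic=True))   # True: plausible metal
print(base_color_in_range((8, 8, 8), metallic=False))        # False: too dark for a dielectric
```

Nothing in Blender enforces these ranges; a check like this just mirrors the guidelines when you author your own albedo textures.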
Below is a list of links to some of the sources for this article. If you want more, check out our other articles and tutorials; a handful are linked below.

The ultimate reference photos workflow in a nutshell

Imagine that you have just decided on your next 3D project and you are thinking about where to start. Well, reference photos of course. You should always start with references, and you should keep them around throughout your whole project. Pinterest is a great tool for sorting and organizing reference images, and in this article we will walk through how to use it together with Kuadro and DownAlbum to get good control over the references we choose to use.

Pinterest is a kind of social media platform that is not very social at all. But it is a very good way to keep track of and sort images that you find across the web, or images that other people have already pinned. Pinning is just the word Pinterest uses for saving an image to a board. A board, in turn, is a folder that is either public or private.

First off, creating an account. Go to the Pinterest site and you will immediately be presented with a form to create an account. You can either enter an email and password or log in through an existing Facebook or Google account. Personally, I always use the email approach, because if I ever have problems with one of my other social media accounts, my account for the given web service, like Pinterest in this case, will be a separate stand-alone account that I can still access.

Create pinterest account

Once your info is entered you will have to confirm your email address (unless you chose one of the other methods), and then you will be ready to start. Pinterest will first ask you a little about what you like, kind of like a wizard that walks you through getting your account started and fills your front page with some content. Once inside, click on your name in the top bar.

Pinterest header bar

Here you can see that you have the option to create a board or a secret board. A secret board is only accessible to you; no other people on Pinterest can see or use it. This is usually where I start, but I might turn a board into a regular shared board once it has begun to be populated.

Now we can start to collect our reference photos. We will start by staying within Pinterest and searching for references that other people have already pinned and shared. For example, I have been interested in making a scene with a medieval or older bridge, so I start with those search terms. When you find an image that you like, just hover the mouse over it and click save. You will then be prompted to choose the board you want to save this pin to. If you have multiple boards, the chosen board will be bumped to the top of the list after a pin, so you don't have to find it for every pin you make. Keep trying search terms related to your subject and you will soon have a well-populated board of images.

Pin image bridge

A few tips on searching for reference photos

With the medieval bridge as an example, I might want to search for bricks to get good close-up images to add to the board. I might also use words like fence, because most bridges have a fence or railing to hold on to. Keep narrowing the search terms down to individual pieces. You can also search for the materials those pieces are made of. I might want a stone bridge with a rusty metal railing; maybe I can find a good-looking balcony that can help me with that railing?

When your board has enough reference photos, maybe 50 pins or more depending on your project, you can click your profile image/name in the top bar again and select your board to view it in all its glory.

Now, if you want to pin images from other sources, Pinterest has a great browser plugin. Go to the plugin page and choose your browser to get instructions on how to install it. Once installed, it may work a bit differently in different browsers. For instance, in Chrome you get a save icon whenever you hover over an image anywhere on the web. Click it and choose your board, simple as that. You can also click the Pinterest icon in the browser header to get a listing of all the images on the current page, which makes it easier to find and pin multiple images from the same site.

Those are the basics of using Pinterest as a tool for organizing reference images onto boards. One downside of Pinterest, though, is that you can't rearrange the pins inside a board; they stay in the order you pinned them. To combat this, we will now look at how to download an entire board and then use a program like Kuadro or PureRef to view our references in a customized, organized way.

Download and view a board on your computer

The software we will need to follow along is the following:

Chrome you probably already have; DownAlbum is just a button click to add the extension next to our already added Pinterest extension. Kuadro, in turn, is just a download-and-run program with a tray icon; no installation. We will assume that you have downloaded and installed all of the above.

To start, use Chrome to browse to the board you want to download. Next, use DownAlbum by clicking its icon; it becomes colored if the site you are on is compatible. Choose "Normal" in the interface that comes up, then click output after a few seconds (depending on how large your board is). You will now see the pinned images in a different interface. At the top it says press ctrl+s, so we had better obey. You will be prompted to save an HTML file; name it something suitable or leave it as is. Wherever you save this file, a subfolder will be created with the same name as the file plus "_files" added to it. Click the arrow next to the newly downloaded file and choose to open it in its folder. The subfolder containing all the downloaded images will be inside. There will also be a file with the extension .download and one with the extension .css; you can delete these as well as the HTML file. The board is now downloaded.

Downalbum Icon
Downalbum output
Downalbum save image
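
The manual cleanup step above (deleting the .html, .download and .css leftovers while keeping the image folder) can also be scripted. A small sketch of my own; it assumes, as in the text, that the images live in a "<name>_files" folder next to the saved HTML file and that the stray files sit inside that folder. The function name is made up.

```python
from pathlib import Path

def clean_downalbum_output(html_path):
    """Remove the DownAlbum leftovers (.download, .css and the saved
    HTML file) and keep the '<name>_files' folder with the images."""
    html_path = Path(html_path)
    image_dir = html_path.with_name(html_path.stem + "_files")
    removed = []
    for leftover in image_dir.iterdir():
        if leftover.suffix in (".download", ".css"):
            leftover.unlink()
            removed.append(leftover.name)
    if html_path.exists():
        html_path.unlink()
        removed.append(html_path.name)
    return sorted(removed)
```

Point it at the saved HTML file, e.g. `clean_downalbum_output("boards/bridge.html")`, and only the images remain.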

Now open Kuadro. It will run as a tray icon down by the clock. Click it and select "Add local image". Browse to the folder of downloaded images, select them all, and hit open. They will be stacked on top of each other, so start dragging the top ones around to view the ones below.

You are now ready to arrange the board on your desktop, perhaps on a second monitor. I find this to be a good workflow for getting your reference photos well arranged both online and locally on your hard drive, as well as viewable in a nice, predictable way.

Kuadro is a very nice piece of software for displaying reference photos. Click the tray icon and choose about to learn more about how it can be used to resize, pan around, and rotate images, along with some other features. The shortcuts I use the most are listed here.

  • Click and drag to move
  • Hover the corners of the active image to resize its canvas
  • Mouse wheel to zoom
  • Middle mouse click and hold to pan when the image is zoomed
  • H or V to flip the image; if this feature gets stuck, zooming and moving the image a bit usually resolves it
  • G for grayscale
  • Hold T and drag with the left mouse button to decrease or increase the transparency of the image
  • Right click on an image for the menu
Kuadro logo
Kuadro menu

Blender Shear Tool Short tutorial#2

This is the second tutorial in our series of short tutorials that each cover one specific topic, tool, or addon. This time: the Blender shear tool. It is very useful for architectural modelling where we need to create nice angles. For instance, when you create a profile for a doorframe or window frame, this tool helps a lot with creating the corners in an easy way.

In Blender, the shear tool takes a profile that you have created by extruding a set of edges along an axis and makes nice angles for it. The shear tool has a very weird shortcut: Alt+Ctrl+Shift+S. Almost like a cousin of the convoluted Alt+Ctrl+Shift+C that is commonly called the claw grip. Only one finger to spare! That is more like a bunch of keys than an actual shortcut. Anyway, the tool is very useful, and it is covered in the video tutorial below. Enjoy!

Earlier tutorials in this series include the F2 addon short tutorial, where we decode the three functions of this addon that is bundled with Blender. It is a very useful tool that can help you improve your modelling speed.

If you find this useful, you can head over to our Youtube channel and subscribe! That would mean a lot to me. You can also contact me on the social media channels where I hang around; links are at the top of every page. And if you want updates on what we do here, consider joining the mailing list.

Blender F2 Addon Explained Short tutorial#1

Blender F2 Addon explained

The Blender F2 addon is a quite simple addon that extends the functionality of the F key in Blender. By default, the F key is a rather unintuitive tool to use. The F2 addon extends its behavior by adding three functions.

  • First, you only need to select one edge and hit F to add a face between the connecting edges.
  • Second, it helps you control this behavior by letting you decide where to fill simply by positioning your mouse cursor.
  • The last function, and my favorite, lets you create faces with a single vertex selected. The addon does this in a very predictable way, in all its simplicity.

This tutorial is the first part in a series of short tutorials explaining simple tools for people who already know their way around Blender but need that extra push to become more efficient at modelling. You can find the video tutorial below. I hope you find it useful and have fun with the Blender F2 addon!

If you are new to Blender or want to find out more about it, you can go to the official Blender website. You can also check out our resource article "21 resources for artists you may not know about" to learn about more places where you can find awesome tools and content.

If you want to get in contact with me about this topic, you can head over to the social media pages or comment below.


21 resources for artists you may not know about

As 3D artists, we are always on the hunt for good resources. I like to add more and more bookmarks to my browser as often as I get the chance. Here I have tried to gather some resources that I think are less known, or at least less talked about, in the 3D artist community. The quality may vary; have a look for yourself and see if anything interests you. The list contains both software and websites.


Let's start with some web resources. Well, all the resources are on the web, but you know what I mean.

First off, some texture resources. is a very well-known site for textures. In the Blender community, we also have that was started by Blender Guru. Both are well known. offers a limited number of downloads of their lower-resolution textures, as well as one high-resolution texture every day that you can download. Both sites have their own take on a royalty-free license, but they are similar in what they allow.

A couple of other resources that are CC0 and pretty impressive: first, the Chocofur material library. All their textures are free and CC0-licensed; you only need an account to download the entire library. The second CC0 texture resource I want to share has a library of about 5 GB of CC0 textures that can be downloaded.

Enough with the textures already, let's get on to some images. is pretty well known; they have over 1.1 million CC0 images now. Not long ago I remember it being around 600k, so it has grown fast. There are now some alternatives to this site that follow the same or similar licensing. They are the following, in no particular order.

Now for some places to gather reference images.

The above sites can be used for that, but here are some extras. Pinterest is the most common place for finding references today, I think, and it is currently unmatched in my opinion. Just make sure you draw inspiration from multiple images instead of copying straight off. For architectural rendering, Houzz is an awesome website for nice references as well; you can make an account and save images in idea books to organize them. Just as a side note for any fantasy or character artists: I don't know of many resources for that other than some brilliant games, but I am sure you already figured that out. Instead, I want to point your attention to a site that doesn't have any images, but does have a whole lot of generators. Sometimes you need a story for a character, and a randomly generated text may be a good place to start. Or why not generate a name or a weapon.

OK, I don't really have any good HDRI resources to share, so I will move on to some software that I use together with Blender. I'm sure it can be used with other 3D packages as well.


The obvious ones are the complementary 2D applications Krita and GIMP. I don't use GIMP myself, but I use Krita quite a bit. There is no point in me talking about those, though, since most of that information has been repeated thousands of times. Anyway, I have some other stuff to share. First off is Kuadro. It's a lightweight program for loading reference images, great for resizing and moving images around on a second monitor to glance at while you work. Another great place to look is a site with a cool vector graphics app as well as an in-browser app for graphic design. Then again, talking about vector graphics without mentioning Inkscape as an open-source alternative would not be fair.

Before you get to work, though, you may need to do some prep work. Sure, OneNote or Evernote can be good tools for gathering information about what you will be making, but I also want to give a little hint in the direction of XMind. It's a mind-mapping application that can be nice for fleshing out ideas or planning. OK, OK, I have a few more to share. Next up is RawTherapee, which is kind of like a Lightroom alternative. It's not as simple, it uses more technical terminology, and the functionality probably doesn't overlap 100%, but it's good for post-processing single images. Some 3D folks out there like to photograph their own textures, and for them digiCamControl can be a cool program to check out if you have a DSLR for the task. It's basically for controlling your camera from the computer, and it allows some pretty fine adjustments and control over settings.

Now we are coming to the end. Last up is Sweet Home 3D. I don't know how well known it is, but it has some nice functionality, and you can export most of what you make in Sweet Home 3D to SVG or OBJ format and import it into Blender or another 3D package. For instance, you can make your own floor plan and either export and import it into Blender, or just take a screenshot and use the image as a background image in Blender as a guide for modelling your house. OK, I hope you found some new resource that you didn't know about before and that can be of help to you. If not, well, at least I enjoyed writing about it.
