Blender 2.8 basics part 4 – Transformations

Blender 2.8 comes with many new features, and one of them is the reworked tools panel. It has its flaws, and you will discover that it works quite differently from using shortcuts.

Now we are finally getting into some of the fun stuff. We will start to manipulate objects, learn more of the common beginner hiccups to avoid and just have some fun exploring Blender and transformations. Widgets are the new thing, however they are not fully ready yet, so stick to the shortcut way for now. It is faster anyway.

Video is available below. Enjoy!

Blender is quite heavy on shortcut keys, but with 2.8 the tools panel has had some rework. Using shortcut keys and using the tools panel differ in a few ways.

Just like in 2.79 we have the “t” and “n” shortcuts to toggle the tools panel on our left side and the properties panel on the right.

From the tools panel, a tool activates on mouse hold and confirms as soon as you release, while its shortcut counterpart uses the shortcut to activate and a full click to confirm.

Currently, the manipulators in the viewport are only visible while the corresponding tool is active. This is quite inconvenient, since most of the time we probably want to have our box selection tool active. Just keep that in mind.

These are just some of the reasons why the shortcut keys are still, at least for me, the preferred method of activating and using tools.

The shortcut keys for moving, scaling and rotating our default cube are “G” for grab (or move), “S” for scaling and “R” for rotation. To constrain the transformation to any given axis, hit “X”, “Y” or “Z” for the corresponding axis.

To constrain to two axes and enable transformations over a plane rather than along an axis, use “shift” plus the axis you want to omit.
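Conceptually, an axis constraint is just vector math: the movement is projected onto the chosen axis, while a plane constraint removes the omitted axis from the movement. A rough sketch in plain Python (illustrative only, not Blender's actual implementation):

```python
def constrain_to_axis(delta, axis):
    """Project a movement vector onto a single axis (e.g. pressing X)."""
    dot = sum(d * a for d, a in zip(delta, axis))
    return tuple(dot * a for a in axis)

def constrain_to_plane(delta, omitted_axis):
    """Shift+axis: keep movement in the plane perpendicular to the omitted axis."""
    dot = sum(d * a for d, a in zip(delta, omitted_axis))
    return tuple(d - dot * a for d, a in zip(delta, omitted_axis))

move = (1.0, 2.0, 3.0)
print(constrain_to_axis(move, (1.0, 0.0, 0.0)))   # only X survives: (1.0, 0.0, 0.0)
print(constrain_to_plane(move, (0.0, 0.0, 1.0)))  # Z removed: (1.0, 2.0, 0.0)
```

Pressing "G, X" corresponds to the first function, "G, shift+Z" to the second.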

When it comes to scaling, and sometimes rotation, it is important to note the difference between scaling in object mode and in edit mode. We have not covered edit mode or object mode yet, but just know that we are currently in object mode. When scaling in object mode, we have to be aware that we scale the object and not the geometry contained within it.

This is usually a great source of confusion for beginners, so just know this for now: whenever you scale in object mode, also hit “ctrl+a” and choose “scale” to reset the relative scale to 1 on all axes after each scaling operation.

If you don’t apply the scale, you will run into trouble with tools later that will behave in unpredictable ways.
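To make it concrete, here is what applying scale does conceptually: the object-level scale is multiplied into every vertex position, and the scale property resets to 1. A simplified plain-Python sketch (the real operation goes through the bpy API and full 4x4 matrices):

```python
def apply_scale(vertices, scale):
    """Bake the object-level scale into the mesh data and reset scale to 1.
    Conceptually what ctrl+A > Scale does in object mode."""
    baked = [tuple(c * s for c, s in zip(v, scale)) for v in vertices]
    return baked, (1.0, 1.0, 1.0)

# A cube corner, with the object scaled 2x in object mode:
verts, scale = apply_scale([(1.0, 1.0, 1.0)], (2.0, 2.0, 2.0))
print(verts, scale)  # ([(2.0, 2.0, 2.0)], (1.0, 1.0, 1.0))
```

The object looks identical before and after, but tools that read the mesh data now see the true dimensions.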

We will continue by looking at edit mode in the next part.

Get our free workflow cheatsheet!

Join our newsletter and get updates of our news and content as well as our 3D modelling workflow cheat sheet.

Blender 2.8 Basics part 3 Viewport navigation

Here we start to take a look at the viewport and navigation in 3D space. It may be a bit different from other software in the 3D industry, however once you get a feel for how Blender's navigation works you will fly through the viewport. Watch this video to get a head start or read on below. Perhaps both.

The previous part about editors and interface guided us through a common beginner mistake that usually ends up in a very messy interface. So make sure to watch that if you haven't.

How to navigate. Rotate, pan, zoom & reset the view

The 3D viewport is the name of the largest editor, the one that represents the 3D world. To navigate around the 3D viewport, use the middle mouse button. Click and hold to rotate the view.

To zoom the view, either scroll the wheel on your mouse or hold control while pressing down the middle mouse button.

For panning the view, hold shift, click and hold the middle mouse button.

The 3D view supports both perspective view and orthographic view. In perspective view we view the 3D world as we normally do with perspective. The opposite of that is orthographic view. It lacks perspective and looks more like a blueprint view. Orthographic view is most useful when viewing an object from a head on angle. For instance, precisely from the top, left or bottom view.

To view the object in any of these modes use the number pad. Press 1 for front orthographic view. 3 for right view and 7 for top view. Hold ctrl while pressing 1, 3 or 7 for back, left and bottom view.

When going into any of these views Blender will toggle orthographic view automatically. To manually toggle between perspective and orthographic view, press numpad 5. This was very common in older versions of Blender, but in 2.8 we don't need to worry so much about it since we get the automatic toggle while using 1, 3 and 7.
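The difference between the two projection modes comes down to whether depth shrinks things. A minimal sketch of the idea (hypothetical helpers, not Blender's camera code):

```python
def project_perspective(point, focal=1.0):
    """Perspective: points farther away (larger z) appear smaller."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def project_orthographic(point):
    """Orthographic: depth is ignored, parallel lines stay parallel, like a blueprint."""
    x, y, z = point
    return (x, y)

near, far = (1.0, 1.0, 2.0), (1.0, 1.0, 4.0)
print(project_perspective(near), project_perspective(far))    # (0.5, 0.5) (0.25, 0.25)
print(project_orthographic(near), project_orthographic(far))  # both (1.0, 1.0)
```

This is why orthographic views are so useful for head-on modelling: two points at the same x and y land on the same screen position regardless of depth.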

A new feature in Blender 2.8 is that you can toggle between the top, front, right views and so on just by holding alt and the middle mouse button, then dragging left, right, up or down to move between the orthographic head-on views.



Blender 2.8 Basics part 2 Editors and Interface


Editors and interface

Just like in the first part there is a video version available, and keep scrolling if you prefer a quick read.

How to rearrange the interface

Let us start with learning how the interface works. If we mess up the interface in some way, we can always go back to the file menu and choose New, then General, or any of the other presets if we feel adventurous. Keep in mind that the interface layout is bound to the .blend file. This means that any changes we did not save will be lost. So far though, we are only experimenting.

By default, the largest editor is the 3D viewport editor. Most of the action happens here. But we also have the outliner at the top right, properties editor below that and a timeline editor below the 3D viewport.

Keep in mind that all these different editors are not always called editors. The properties editor might just as well be called the properties panel. And the outliner editor is most often just called the outliner.

All editors have an editor type menu. Here we can choose what kind of editor will occupy the space.

We can rearrange the editors in any way we want by moving our mouse to any of the corners of any editor. There is no visual representation to indicate that the corners are active, but the mouse cursor turns into a cross.

Click in a corner and drag into the editor to duplicate it. The space that was just occupied by one editor is now split into two, both of the same editor type. You can change any of the new editors into any other editor type.

To collapse an editor, move into one of its corners and drag across the line dividing the editors. This collapses the editor whose corner you started from into the editor you dragged into. Note the arrow that appears: it indicates the editor that will be left, and it will occupy the space of the editor next to the arrow.

How to work with workspaces

We will cover workspaces in another part but for now, here are some basics.

Workspaces are the row of tabs at the top of the application. By default, you are on the Layout workspace. You also have Modelling, Sculpting, UV Editing, Shading etc.

Each of these workspaces has a default layout specifically designed for a task or part of the 3D art workflow. You may use some or all these workspaces during a project.

Even if you don’t use the workspaces themselves for the given task you will most likely perform the tasks that they are meant for in a lot of your projects. Note that any task can be performed in any of the workspaces. They are just templates of the interface to get you started on any given task.

You can create your own workspaces by hitting the plus next to the workspace furthest to the right. Click it and add a workspace that is not already added or duplicate your current workspace to start customizing your own from.

The workspace does not only take editor layouts into account but also actual settings in the respective editors. For instance, starting modes, shading presets and other settings.

Great work getting this far! Now off you go to the next part, where we tackle 3D viewport navigation.



Blender 2.8 Basics part 1 Download & install


By the end of this article you will have learned how Blender 2.8 download and installation works. You will also know what system requirements are recommended. At the end we will also look at some initial settings that are good to know before starting your Blender adventures.

If you would rather watch a short video instead of reading, or perhaps prefer both, here it is.

Prerequisites of Blender

These are the system requirements for Blender 2.8 according to blender.org

Minimum

  • 32-bit dual core 2 GHz CPU with SSE2 support
  • 2 GB RAM
  • 1280×768 Display
  • Mouse or trackpad
  • OpenGL 3.3 compatible graphics with 512 MB RAM

Recommended

  • 64-bit quad core CPU
  • 8 GB RAM
  • Full HD display
  • Three button mouse
  • OpenGL 3.3 compatible graphics with 2 GB RAM

Optimal

  • 64-bit eight core CPU
  • 16 GB RAM
  • Full HD displays
  • Three button mouse and graphics tablet
  • Dual OpenGL 3.3 compatible graphics cards with 4 GB RAM

Keep in mind that a 3-button mouse is recommended and for an optimal experience a graphics tablet is good to have. The graphics tablet in this case will be used for drawing or sculpting in most cases. In this basic introductory series, we won’t use a graphics tablet, but we will assume a 3-button mouse.

Blender 2.8 Download and installation

Right now, we will start with Blender 2.8. It is currently in beta, but it is very different from the stable version, and it is also the way forward. So, skip 2.79 at this point and go straight for 2.8 to avoid relearning later.

Go to the blender.org website and click “Download blender 2.79b”. It may say 2.80 or later if you are reading this after 2.80 has come out of the beta stage. Then hit “Try Blender 2.80 beta” to get to the correct downloads. Select your operating system and download the corresponding 2.80 beta file. Most likely you will need the 64-bit version.
From here, I will assume that you are using Windows.

  • Locate your downloads folder
  • Right click on the zip-file and choose “extract all…”.
  • Choose a location and hit “extract”
  • Browse to the folder and locate the “blender.exe” file and start it.

Initial settings

When starting Blender, we will be presented with a splash screen only viewable the first time we start Blender. This is the quick setup. Here we can change the selection method and the spacebar hotkey.
We will stick with left select but change the spacebar to search. The default of “Play”, which plays the animation, is not useful in as many circumstances as search. We can change the theme if we want, or load in settings from 2.79 if we have used a previous version.

When clicking outside the splash screen a second splash screen will appear. This is the screen we will be presented with every time Blender starts. Click again, and in the menu in the top left corner, click edit and go to the bottom where you find “preferences”.

In here there are a lot of settings. We will first visit the keymap category. In the top section you will find the same settings that were available in the quick setup. Here, tick “select all toggles”. This will allow you to toggle selections with the “A” hotkey instead of selecting with “A” and deselecting with either “alt+a” or double tapping “A”.


For laptop users, continue to the Input section; everyone else can skip to the next paragraph. In the input settings we can make life easier for laptop users that are missing the numpad part of the keyboard. Check the “Emulate Numpad” checkbox to have the number row above the alphabet act as if they were the numpad numbers. Next, check “Emulate 3 Button Mouse” if you don't have a mouse with a middle button, to make “alt+left click” act as a middle click.

We will now head over to system. Here we can enable CUDA for Nvidia graphics cards or OpenCL for AMD graphics cards. This will allow us to take advantage of the power those graphics cards have to help us accelerate Blender. If you don’t have a dedicated graphics card just stick to none.

Just below the CUDA/OpenCL settings is the memory and limits section. Here we can increase the number of undo steps available while working. I have mine set to 70 to be able to go back further if I realize that I made a significant mistake I need to backtrack from.

As a last step we will enable 3 addons before we dive in. Go to the add-ons category and click in the search bar. Type “f2” and check the box next to the addon. Do the same for the loop tools addon and node wrangler.

When this is done, hit “save preferences” and hit the X to close the preferences window.

For commenting and feedback, please visit the youtube video page, or for personal messages use the contact page.

Great, now off you go to the next part covering editors and interface.


Eevee lighting interiors


Goal

In this article, the goal is to outline the key points that we need to consider when dealing with lighting and reflections in Eevee for interior scenes. For Cycles, we had the luxury of plug and play with an HDRI. A lot of the time that was enough to get good base lighting for a scene. In Eevee, it is not so simple. We will try to demystify the relevant checkboxes and sliders that come with the benefit of a real-time render engine.

Since this is the first time in history that we have a raytraced engine like Cycles working with the same shader system as a rasterized real-time render engine like Eevee, we do not yet know exactly how workflows are going to evolve in the coming years. Most certainly workflows will be faster for shading, but the lighting in the two engines is quite different, at least at this point.


Overview

Let’s start to list some of the main terms and tools that Eevee uses for lighting and reflections.

  • Light probes
    • Reflection cube map
    • Reflection plane
    • Irradiance volume
  • Screen space reflection
  • Ambient Occlusion
  • Shadow settings, both global and lamp settings
  • Indirect lighting settings and baking

These are the main terms and tools that we will look at. We will not cover every setting for every area of interest. We will dive deeper into some areas and stay shallower in others. The main goal is to get a good starting point for interior lighting in Eevee and to combat some of the errors that we may encounter.

The main problem we will encounter when working with interior scenes and Eevee is what is known as light bleed. This is when light leaks in at the edges of our interior even if there is no gap in the geometry. We may also run into some reflection issues and other artifacts that are most of the time solved with the settings and tools we will discuss.

When lighting in Eevee there are some restrictions to keep in mind that we did not have in Cycles. We do not currently use emission shaders on geometric objects. They are only capable of making reflections right now. Instead, we use the good old trusty light objects. Another limitation is that we do not have access to node-based materials for lights as we do in Cycles. We are also best off avoiding HDRI maps, since they tend to create light bleed at the edges of our interior. In fact, most light that comes from outside our interior scenes tends to create light bleed. But we will look at a way to deal with that.

General settings

Let’s talk about some settings. By default, all the post-processing effects in Eevee are turned off. In the properties panel, go to the render settings and turn on Ambient Occlusion and Screen Space Reflections. In the screen space reflections settings, you may want to tick the refraction box if you are planning on using glass shaders, and untick half res trace if you have a mid to high-end graphics card and your scene is not too complex.

Still in the render settings, we will turn up the samples in the sampling section at the top. In my case, I usually have the viewport samples at 256 and the final render samples at 512. This is to clean up the soft shadows as much as possible. For rendering animations, I may turn the render samples down to 256 as well if I am in somewhat of a hurry.

Under the shadow settings, we will also make some changes to help us reduce light bleed later. Change the method from ESM to VSM, tick high bit depth and soft shadows. Set the cube size to 512 or higher for best results.

The cube size setting applies to all kinds of light objects except sun lights, where cascade is the shadow map type instead. When lighting an interior from the outside, however, a point light is less prone to giving us issues with light bleed than a sun lamp, therefore we will omit sun lights altogether, so cascade shadow maps won't matter for us in this case.

Keep in mind that all these settings are heavy on memory. VSM uses twice the amount of memory as ESM. High bit depth also doubles the memory usage of shadow maps and soft shadows need more samples to get rid of noise which, in turn, requires more computing power from your graphics card. If you have a problem with a slow viewport after changing these settings, consider changing them only when preparing the final render. For a middle ground, use only high bit depth and VSM for now. VSM may have artifact problems when high bit depth is inactive.
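As a rough illustration of how these options multiply, here is a back-of-the-envelope estimate of cube shadow map memory. The assumptions (2 bytes per channel normally, 4 with high bit depth, one channel for ESM versus two for VSM, which stores depth and depth squared) are my own approximation, not Blender's exact allocation:

```python
def shadow_map_bytes(cube_size, vsm=False, high_bit_depth=False):
    """Rough estimate for one cube shadow map: 6 faces, 1 channel for ESM
    vs 2 for VSM, 2 bytes per channel normally, 4 with high bit depth.
    Illustrative only; the real renderer may allocate differently."""
    channels = 2 if vsm else 1
    bytes_per_channel = 4 if high_bit_depth else 2
    return 6 * cube_size * cube_size * channels * bytes_per_channel

base = shadow_map_bytes(512)
print(base // 1024, "KiB")                                            # ESM baseline
print(shadow_map_bytes(512, vsm=True) // base, "x for VSM")           # 2 x for VSM
print(shadow_map_bytes(512, vsm=True, high_bit_depth=True) // base)   # 4 x combined
```

So VSM with high bit depth costs roughly four times the baseline per light, which is why deferring these settings to the final render can keep the viewport responsive.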

Eevee lighting the interior

Lighting from the inside is usually not a big issue and seldom leads to lighting artifacts, so if you can keep all lights within the room it will probably be a quite painless experience. The problems start when you try to light a scene from the outside. When lighting from the outside with Blender's default settings in Eevee you will most likely see artifacts in the form of light seeping through at the edges of the room. That is what we call light bleed.

To combat this, we have a few changes to make. The first thing we should do is to add thickness to the walls of our room. If your room is set up like a simple cube you can simply add a solidify modifier and adjust the thickness and see how the light bleed starts to have less and less of an effect.

For best results, make sure that the only light available outside the room comes from point lights, area lamps, and possibly spotlights. Use a point light instead of a sun. Also make sure that your new point light has these shadow settings.

  • Turn on shadows if it’s not already on
  • Set clip start to 0.1
  • Softness to 0.0
  • Bias to 0.001
  • Exponent can be left at 2.5
  • Bleed Bias set to 0.1
  • Turn on contact shadows
  • In the contact shadows set softness to 2.0

With these settings on a point light instead of a sun together with the general settings we did earlier we should be able to handle most artifacts as long as our walls have some thickness.

To add some skylight to this, go to your world settings and instead of adding an HDRI to light the scene, stick with a background color to fill the scene with ambient light. For instance, tint the color towards a light blue and set the strength to somewhere around 4-10 to simulate some skylight.

If you want to light without the directional light from a point light acting as the sun, you can also light with area lights right outside the windows of your scene. Keep in mind that it is important not to put the lights inside the wall, as that can also result in light bleed.

Indirect lighting using irradiance volumes

So far, we have the direct light in our scene, but what about indirect light? In cycles, indirect light is calculated as we render. But in Eevee, indirect light is calculated beforehand. To get indirect light we use an irradiance volume. An irradiance volume is a grid of points that capture indirect light during a baking process. When the bake is done, the irradiance volume will work as light itself and light the scene with the indirect light captured during the bake. At least in theory.

The irradiance volume will use the closest capture point for indirect light. This means that if we have capture points outside our interior, or inside the walls or other objects in our interior, we will capture light either from the outside or from inside one of our interior objects. In those cases, we will get different artifacts depending on our light setup, because if a capture point of our irradiance volume is just outside a wall, it will cast indirect light coming from the outside onto the inside of our wall.

It is therefore essential that all our capture points are inside our interior, capturing the indirect light that we actually do want to capture and as long as we do, it might help to think of them as lights.

 

To add an irradiance volume hit “shift+a” and go to light probe. There you will find irradiance volume. You can move, rotate and scale the volume just like any other object. Position it so that it fits within your interior. If your interior happens to have an L-shape or another shape that can't be covered by a single irradiance volume, then add as many volumes as you need and put them into place. The dots capturing the light should not overlap; place the volumes so that together they form a continuous, even grid for the entire area where you want to capture light.

To change the number of dots in a given axis of the irradiance volume, go into the settings and change the resolution. As a rule, fewer dots will cause fewer problems and more even light. I usually go with about 1 dot per 1.5-2 meters of space. Sometimes fewer. Now, let’s bake the indirect light. Go to the render settings and find the indirect light section. Hit “bake indirect lighting”. This will also bake any reflection probes that you might have. More on those later.
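The rule of thumb of roughly one dot per 1.5-2 meters can be expressed as a quick calculation. The helper below is hypothetical, just to show the arithmetic:

```python
import math

def irradiance_resolution(room_size, spacing=1.75):
    """Suggest a per-axis resolution for an irradiance volume:
    roughly one capture point per `spacing` meters, at least 1 per axis."""
    return tuple(max(1, math.ceil(s / spacing)) for s in room_size)

# A 6 x 4 x 2.5 m living room:
print(irradiance_resolution((6.0, 4.0, 2.5)))  # (4, 3, 2)
```

Start there, then nudge the resolution up or down if you see artifacts after baking.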

When the baking is done you can preview what each point has captured in the general render settings. Go to the indirect lighting section and find the display subsection. Tick the eye icon next to the slider for irradiance size and increase the size of the preview. Just right above that you can preview reflection probes as well.

If you experience problems after the bake it might be that one or more of your dots is inside a piece of furniture or another object in your scene. If this is the case, put those objects in a separate collection and disable them for rendering in the outliner. Try to bake again and bring the objects back after the bake has finished. The objects will still get indirect light cast onto them, but they will not cast any indirect light themselves. This workaround will most likely be good enough for most scenarios if you can't adjust the position of your irradiance volume probes.

Sometimes a given number of probes for an area can create light bleed. Therefore, if you have light bleed for no apparent reason, try to decrease or increase the resolution of the irradiance volume one or two steps in either direction and bake again to see if that gets rid of the light bleed.

Windows and light

When it comes to windows and glass materials, if we don’t need the reflections on the glass, removing it completely is a reasonable way to go. If we want the reflections and roughness on the window, however, this is a usable node setup for the glass.

It is not physically correct, but in Eevee, what is? It gives you some entry points to work with the roughness and reflections of the window. To change how much or how little reflection you want, adjust the curve, and to change the roughness, plug any texture into the roughness of the glossy node or use the slider for a uniform roughness. When using a window, make sure that it is turned off while baking the light and cube maps. Also make sure that the geometry for the window has some thickness and uses flat shading.

Reflections

We have covered both direct and indirect lighting for interiors. Now it is time to think about reflections. We have a few options to work with. The first one is screen space reflection. It will reflect anything that is visible on our screen. If it is not visible it will not reflect. For things that are not within our view, we will need a light probe. The light probes concerning reflections are the reflection cube map and the reflection plane. The reflection cube map can be set to either a sphere or a box. For most interiors, we have rooms that are square shaped and therefore we use the box alternative for most of our interiors. For outdoor scenarios, a sphere will be more likely to work.

Screen space reflection does not need to be baked. It is the primary means for us to get reflections in Eevee. Just like any of the reflection objects, (cube/sphere or plane) it works for any material with a reflective property and will reflect anything that we can see directly on the screen. Probes are the secondary means of reflections and will complement any reflection from screen space. The reflection objects, however, need to be baked.

In the render settings, we can bake reflections independently from indirect light. This is useful if we need to remove or hide objects in our scene in order to bake the indirect light without issues and then bring the objects back for baking the reflections.

Screen space reflection may be enough for some of our scenes, but when we have reflected objects that are not on the screen, perhaps behind the camera, around a corner or just behind another object in the scene, the reflection probes become handy. They make a mirror from their location and use that mirror as the reflection for any object within their range. It is not 100% accurate, but it gives us a close approximation.

A reflection cubemap has quite a few properties. What we need to keep in mind first is the distance. The distance determines which objects will be influenced and therefore reflect the data that the probe collects. Then we need to keep track of the clipping start and end. This is where the probe will start and end its collection of surroundings to reflect. The start can be important if the probe is placed inside another object. In those cases, the start clipping could be set just outside that object. When dealing with interiors, the clipping should end beyond any walls so that the probe reflects everything within the interior.
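To summarize the two value pairs: the influence distance decides which surfaces use the probe, while clip start and end decide what the probe itself records. A hypothetical sketch of those two checks (plain Python, not the actual renderer logic):

```python
def dist(a, b):
    """Euclidean distance between two 3D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def probe_influences(probe_pos, surface_pos, influence_distance):
    """A surface uses the probe's reflection only within the influence distance."""
    return dist(probe_pos, surface_pos) <= influence_distance

def probe_captures(probe_pos, object_pos, clip_start, clip_end):
    """The probe only records geometry between its clip start and clip end."""
    d = dist(probe_pos, object_pos)
    return clip_start <= d <= clip_end

probe = (0.0, 0.0, 1.5)
print(probe_influences(probe, (2.0, 0.0, 1.5), 5.0))            # True: surface in range
print(probe_captures(probe, (0.0, 0.0, 1.55), 0.1, 10.0))       # False: closer than clip start
```

This is why a probe placed inside a lamp shade or other object needs its clip start pushed just past that object's surface.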

Reflection planes will mostly be used for mirrors or highly reflective flat surfaces. In look dev viewport mode or rendered viewport mode, move the plane closer and closer to the reflective surface until it looks correct. At that point, the plane will reflect as intended. The distance value will determine how far away from the reflection plane a reflective surface could be to be affected. Normally you use one plane for every highly reflective surface that you have. Also, make sure to scale the reflection plane slightly larger than the surface, rather than slightly too small.

Summary

We have taken a quite technical look at settings in Eevee that are important for lighting and reflections in interior scenes. These are some of the key takeaways to keep in mind.

  • Use ambient occlusion and screen space reflection.
  • Use VSM rather than ESM for interiors.
  • High bit depth and soft shadows together with sample counts of 256 or 512.

Lighting from the outside is prone to light bleed. These settings on a point lamp together with a light blue ambient world color is a good starting point for daylight lighting from the outside.

  • Turn on shadows if it’s not already on
  • Set clip start to 0.1
  • Softness to 0.0
  • Bias to 0.001
  • Exponent can be left at 2.5
  • Bleed Bias set to 0.1
  • Turn on contact shadows
  • In the contact shadows set softness to 2.0
 
  • An irradiance volume stores indirect light prebaked.
  • It is important that the individual sample points of the irradiance volume are inside our interior scene.
  • Use transparent shader instead of glass shader for windows
  • Screenspace reflection is a good start for reflections but reflection probes will help fill in the spots we can’t see directly from the camera.
I hope you learned something new about Eevee lighting.

Much of the information in this article comes from this thread on blenderartists.org.



Physically based rendering, blender nodes, with brick texture example


In the last post, physically based rendering and Blender materials, we looked at how the principled shader really works. We laid the foundation for our future material creation in Cycles, and for Eevee when Blender 2.8 finally arrives. Here, we will take a close look at how this is all implemented in the node editor, using image textures to power the principled shader. This setup will then be supported by various other nodes to give us a system to work with when layering different materials on top of each other.
If you missed the previous post, here are the summary and key takeaway points for using the principled shader in Blender.

  • You should use non-color data for all your textures except base color for both metals and nonmetals
  • For metals, keep your color values in the lightest 30% of the sRGB range in the base color map.
  • For dielectrics, keep the color values above the 10% darkest and below the 5% lightest for the base color map.
  • In most cases, the metallic input is either 1 for metals or 0 for dielectrics. Seldom much in between.
  • When the metallic input is 1, the specular has no effect. The specular is instead calculated from the base color.
  • Roughness is the most artistic map, use it to tell the story of your object
  • The normal map is angle data for outgoing light rays and not height information.

For any material that we can power with a set of image textures that are prepared for the metallic workflow, the system that we will discuss here will work very well and be very efficient.

If you want to follow along you can read the next section or skip to the “Brick texture and concrete combination” section to get right into the good stuff.

Setup the brick texture example

For demonstration and testing we will be using a set of image textures that are provided here:

bricks_texture_example_assets.zip

They are provided under the CC0 license, originally from cc0textures.com. The HDRI is also licensed under CC0 and collected from hdrihaven.com.

For this workflow guide, we will add a sphere and UV unwrap it with the sphere projection option while viewing the sphere in orthographic view from the front, by hitting numpad “5” followed by “1”. Also, add a material slot and name it.


Position the camera and set the resolution to a square like 1024 by 1024 or maybe 1920 by 1920. Move the resolution up from the default 50% to 100%.

Next, we will enable the node wrangler addon by going to user preferences, “ctrl+alt+u” for the keyboard-oriented Blender artist. Go to the addon section and start typing “wrangler” to filter the list in real time. Check the box next to the node wrangler addon, hit “save user settings” and we are set so far.

Last preparation will be to add the HDRI image. Go to the node editor, select the environment material on the earth icon and add an environment texture node. Browse for the image and add it. If not already selected, select the environment texture node and hit “ctrl+t”. This will add a texture coordinate node and a mapping node. This is a function of the node wrangler addon. Using the z rotation in the mapping node we can now rotate our HDRI.

Enough with the boring setup stuff, let’s get on with the show!

Brick texture and concrete combination

Our example will be a brick texture where we want to introduce patches of concrete where the bricks have fallen off. We will also add some dirt. Each layer will be added in a slightly different way because of its role in the full material. Remember, though, that this is not a guide to creating a brick material. We are here to learn a highly customizable and flexible system for creating materials that we can reuse for most materials.

The image below shows the basic setup for a dielectric PBR material with the standard maps.

  • Color
  • Roughness
  • Normal
brick-texture-setup

We can add this setup quickly by selecting the principled shader, hitting “ctrl+shift+t” and selecting all the maps that we need for the material. If the maps are named properly, the node wrangler addon will set up the rest for us like this. We don’t have to worry about which image textures should be set to color or non-color data, the normal map will get its corresponding normal map node and so on. If we have other maps, like a metallic or specular map, those will also be added and connected correctly. A displacement map, however, will be wired to the displacement input of the material output node. We can, however, skip the displacement or combine it with our normal map through a bump node like this.

Our brick material is done. The first combination will be with a concrete material. Start by duplicating the principled shader and use “ctrl+shift+t” again, this time selecting the color, roughness and normal maps for the concrete material. The new concrete will be added above the bricks in this example.

At this stage, we will combine the two materials with a mix shader and get a very ugly blend between the two. Instead of blending the materials uniformly we want to tell Blender which material goes where, and for this we will use a mask. A mask is just a black and white texture. We can use any image, procedural texture or combination of the two to create the mask, but we will go with the simplest option and use the procedural noise texture node. Add it, select it and hit “ctrl+t” to automatically add the mapping and texture coordinate nodes. Then connect the noise texture to a color ramp node before connecting the color ramp to the fac input of the mix shader. This is what it looks like.

To tune the mask, it is easier if we set up Blender with a layout like the image below. We have a rendered view with the render border active to minimize the screen area we need to render. Create the border with “ctrl+b” and clear it with “ctrl+alt+b”. To the right, we have the nodes we need to work with available as well.

Use “ctrl+shift+mouse click” on the color ramp to create a temporary view of what the node outputs. To reset the view back to our material “ctrl+shift+mouse click” on the mix shader which is the last shader node in our material chain before connecting to the material output node.

Bring the two flags of the color ramp close to each other to create a high contrast map. Also, change the details value of the noise texture to 16 to create a more natural border between black and white. Now “ctrl+shift+mouse click” the mix shader to preview the mix we created. From here only tweaks remain until you are happy with the result. For me, I went ahead and inverted my color ramp and set the scale of the noise texture to 3.
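If you want to reason about what the color ramp is doing to the noise values, here is a minimal plain-Python sketch of a two-flag black-and-white color ramp. The `color_ramp` helper is a hypothetical illustration, not Blender's API; the point is that bringing the two flag positions close together turns a soft gradient into a hard-edged mask:

```python
def color_ramp(x, black_pos, white_pos):
    """Emulate a two-flag B&W color ramp with linear interpolation.

    Values below black_pos map to 0, values above white_pos map to 1,
    and values in between are interpolated. The closer the two flags,
    the higher the contrast of the resulting mask.
    """
    if white_pos == black_pos:
        return 0.0 if x < black_pos else 1.0
    t = (x - black_pos) / (white_pos - black_pos)
    return max(0.0, min(1.0, t))

# Flags far apart: a soft gradient passes through unchanged
print(color_ramp(0.5, 0.0, 1.0))    # 0.5
# Flags squeezed together around 0.5: a hard black/white mask
print(color_ramp(0.45, 0.48, 0.52))  # 0.0
print(color_ramp(0.55, 0.48, 0.52))  # 1.0
```

The same thresholding idea applies to every mask in this article, whether the input is a noise texture or an image.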

This combination of materials was easy enough, and it is the basis for combining any PBR materials in Blender. From here we can take any principled shader with all the nodes feeding into it, group them, and drop the group into any other cycles material to combine it with other materials. All we need is a set of three maps for each material and a mask to tell which material goes where.

Leaking effect

Next, we will look at how we can combine this with a material that needs a very specific placement on top of our other materials. We will add some leaking effects that should start from the top and fade out as it comes further down our sphere. We will require a new specific UV map for this effect as well as a mask that masks out the exact area for the effect. In this case, we are lucky enough that a mask image is provided so we will use that. However, this is not always the case and sometimes you will have to create your own or tweak an alpha channel and use that as a mask. We will look at how to do this as well.

Let’s start by adding a new principled shader and importing the leaking set of textures. Then add a mix shader between our existing mix shader and the output node. Connect the leaking principled shader to the empty slot and we should get this.

Now delete the texture coordinate and mapping nodes for this newly added material and add a UV map node. With this, we can specify a new UV map, though we will need to create it first.

If you are not very familiar with UV maps, just follow along. It won’t be a very hard process. Remember, this is a system that should be easy, right? It may seem daunting right now but trust me, it is the same operations over and over with slight tweaks and adjustments. You already know the basics.

Creating the leaking UV map

Start by bringing in a uv/image editor and, in the properties panel, go to the object data tab (the triangle icon) and click the plus in the UV maps section. This will add a copy of our original UV map. I chose to rename mine to “leaking”. Fill in the name in the UV map node that we added previously.

If we select the new UV map we can alter it. We can scale down the parts of the mesh that should not have any leaking effect and hide them in a black area of the image, while the parts that should have the leaking effect are now adjustable with pixel precision inside the uv/image editor.

My leaking UV map looks like this for now, since I want the effect on most of my sphere and the top and bottom will not be visible in the image.

And now the mask

Back in the node editor, we will add some nodes for our mask image that we happened to have in this case.

You can see that we have the same concept as before, only this time we have an image mask instead of a noise texture, and therefore UV coordinates to drive it. The color ramp is adjusted based on the image’s values. The important thing to remember here is to use “ctrl+shift+mouse click” to preview the color ramp’s output and adjust it accordingly. In this case, I did not want the mask to go from complete black to complete white, so I darkened the white a bit so that the underlying material would come through slightly.

If we needed to use the image’s alpha channel as the mask instead, the node editor would look like this.

We can also just use the image color itself and collapse the black and white range to generate a mask. It could look like this.

Note the shift from using the alpha output from the image texture to the color output since we may not have an alpha channel to work with.

At this point, we have arrived at additional tweaking. Usually, I tweak each material to look the way I want it right after adding it and before adding in the next one. However, I figured that it would be easier to follow if we left it for the end.

Individual material tweaks

We are going to look at a couple of ways to add flavor to our material before we go to the summary. The most noteworthy tweaks are introducing some color variation to the individual materials and creating a more distinct border between our bricks and concrete. We will also tweak the roughness.

Let’s start by adding some color variation to the bricks. Zoom in to the part of the material where the bricks live and look at the color map. To introduce some variation, we first create the variation and then mask where the variation should be applied and where the original color should remain. We can do this in a way very similar to the mix shader earlier, but this time with a mixrgb node. The mixrgb node will serve as the mixer, though we still need something to mix and a way to mix it. Add a hue/saturation/value node to generate a slight variation of our texture. I will set mine to these values.

  • Hue 0.48
  • Saturation 1.2
  • Value 0.8

Now we have a fac input available on the mixrgb node, where we will use the same trick as before. Combine a noise texture, with the details set to 16, with its corresponding texture coordinate and mapping nodes using “ctrl+t” with the noise texture selected. Then add a color ramp between the noise texture and the mixrgb node. Collapse the color ramp to get the black and white mask that we want.
This is how I ended up setting up the color variation for a very slight difference in color across the surface.
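If the hue/saturation/value node feels abstract, here is a rough plain-Python sketch of the idea using the standard `colorsys` module. The exact formula is an assumption for illustration, not Blender's implementation; the node's hue input is a shift around a neutral 0.5, while saturation and value act as multipliers:

```python
import colorsys

def hue_sat_value(rgb, hue=0.5, saturation=1.0, value=1.0):
    """Rough emulation of a hue/saturation/value adjustment.

    Hue 0.5 is neutral: values below shift the hue one way, values
    above shift it the other. Saturation and value scale the color.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + (hue - 0.5)) % 1.0
    s = min(1.0, s * saturation)
    v = min(1.0, v * value)
    return colorsys.hsv_to_rgb(h, s, v)

# A brick-like red nudged with the article's values: hue 0.48,
# saturation 1.2, value 0.8. The result is a slightly darker,
# more saturated variant of the original color.
brick = (0.45, 0.20, 0.15)
variant = hue_sat_value(brick, hue=0.48, saturation=1.2, value=0.8)
```

The mask from the color ramp then decides, per pixel, how much of this variant replaces the original color.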

You have quite a lot of parameters at your disposal to get the noise the way you want. You can adjust rotation, location and scale for any single axis in the mapping node to get a stretched or just different effect. You can also try using object or camera coordinates to generate different noises. You can also try to add more flags to the color ramp and play with those values to have complete control over the transition between light and dark.

Creating a more distinct transition between bricks and concrete

For this part, we will drive our effect from the mask separating the bricks from the concrete and then feed it through a bump node that we combine with the existing normal map data in the concrete material. Look at the edge between the bricks and the concrete in this image to see what effect we are after.

If you have ever used a 2D image manipulation program like Photoshop, Gimp or Affinity Photo you know that you can select part of the image and have the marching ants show the way, right? We will do much the same here, but we will mathematically tell Blender what parts we want to select, again using masks. Right after the mask dictating the brick vs concrete distribution, add an invert node to invert the mask and then feed it through a new color ramp. These nodes should not have anything connected to their outputs right now. Instead, hit “ctrl+shift+mouse click” to see what the color ramp’s output looks like.

Move the black flag of the new color ramp towards a position of 0.6 or 0.7.

Add a new mixrgb node and set the blend mode to “linear light”. Connect the new color ramp to the top socket and the first color ramp, the one driving the brick/concrete mix, to the second socket. Then add another color ramp after the linear light mix node. It will look like this.

Bring the white flag of the new color ramp to about position 0.2 to collapse the gray tone that the linear light left behind.
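For the curious, the "linear light" blend mode doing the heavy lifting here is commonly defined as `base + 2 * blend - 1`, clamped to the 0-1 range (I am assuming Blender uses this standard formula). A tiny Python sketch shows why a mid-gray blend input leaves the base untouched while brighter or darker inputs push it toward white or black:

```python
def linear_light(base, blend, fac=1.0):
    # Standard linear-light formula, clamped to the displayable 0-1 range.
    out = max(0.0, min(1.0, base + 2.0 * blend - 1.0))
    # The fac input fades between the untouched base and the blended result.
    return base + fac * (out - base)

print(linear_light(0.5, 0.5))  # 0.5: a mid-gray blend changes nothing
print(linear_light(0.2, 0.9))  # 1.0: a bright blend pushes toward white
print(linear_light(0.8, 0.1))  # 0.0: a dark blend pushes toward black
```

This is why the gray tones left behind after the blend need one more color ramp to collapse them back into a clean mask.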
Now add a bump node in the concrete material between the normal map node and the principled shader. Then connect the last color ramp in the chain to the height input of the bump node and the effect is done.
We now have something like this.

A material is never finished. We could continue to add color ramps between the roughness maps and their corresponding principled shader or we could add variation to the leaking texture color for example. There are many possibilities.

Summary of the brick texture example and the system

The big takeaway from this article is not the bricks or the concrete but the flow of nodes. This way we can easily take a material or a mask, create a group out of it, and have a very easy and flexible system for layering different materials on top of each other. We can also present the material much more clearly. Take a look at this for example. Here I have created groups by selecting nodes and using “ctrl+g”, and renamed the groups in the “n” panel. Creating materials this way gives you a very good way to reuse groups of nodes.

This is a very solid foundation for building your materials and it is also compatible with the upcoming Blender 2.8 version and its real-time engine Eevee.

Physically based rendering and Blender materials


Physically based rendering in Blender has been guesswork for some time. With 2.79, however, comes the principled shader. It will help you create accurate Blender materials for cycles. However, there still seems to be some confusion about how it works. Let’s take a closer look at it and nail physically based rendering once and for all.

Physically based rendering, or PBR for short, is a way for ray traced render engines such as cycles to accurately describe a material. It lets the artist focus on the artistic side to a larger extent and leaves the more technical issues to the render engine. This information is a set of guidelines and is not written in stone. It is here for you to get an idea of how PBR works with the principled shader. Once you understand it you can break and bend the rules to your will, but a good foundation to start from is better than diving in without knowing how the shader reacts to different settings or texture map inputs.

Two different workflows

PBR can be divided into two different workflows. Within the realm of physically based rendering you use one or the other; both give the same result in the end. The difference is the set of texture maps used to describe the final material. These are the two workflows.

  • Metallic/Roughness
  • Specular/Glossiness

The new principled shader is geared towards the metallic/roughness workflow, and that will be our focus here. It is the most common workflow, but you should be aware that the other exists as well. However, specular/glossiness is not natively supported in Blender by any single shader.

Physically based rendering texture maps

When dealing with the Metallic/Roughness workflow we have three specific texture maps to work with.

  • Base color(Diffuse, Albedo)
  • Metallic
  • Roughness(or inverted glossiness map)

We also have a few that are also shared with the specular/glossiness workflow. They are the following.

  • Normal
  • Height/Displacement
  • Ambient occlusion(AO)

For most materials, we are concerned with the first set of maps and the normal map. The height and AO maps are optional. With these four maps we will be able to create a lot of the materials that we see around us every day. Before we talk more about maps however, let us first take a brief look at fresnel followed by color spaces.

Example renders of the diffuse, metallic, roughness and normal maps.

Fresnel and specular

What is fresnel? First of all, it is pronounced with a silent s. Second, it determines the falloff of specular reflection from a viewing angle of 0 degrees out to the edge of an object. Fresnel at a 0 degree angle is also referred to as F0. This is a property that all materials have.

The clearest example of this is standing at the edge of a calm lake: looking straight down you can see through the water clearly, but as you look into the distance the water becomes more and more like a mirror. This is fresnel. At a 0 degree viewing angle the specular varies heavily depending on whether you are looking at a metal or a non-metal material.

If we are looking at a metal straight on, the F0 is between 70% and 100% depending on which metal we are looking at. For non-metals, this value is between 0% and 8% in most cases. At the edge of the object the specular is almost always at or near 100%. Looking at a sphere makes this clear: the edges become more and more specular as the surface turns away from us.

Principled shader with a dark red #531D21 base color and a roughness of 0.2, lit by an HDRI from hdrihaven.com.
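The falloff described above is commonly modeled with Schlick's approximation, which interpolates from F0 at a head-on view to full reflectance at a grazing angle. Here is a small Python sketch; the render engine's actual curve may differ slightly:

```python
def schlick_fresnel(f0, cos_theta):
    # cos_theta is the cosine of the angle between the view direction
    # and the surface normal: 1 means head-on, 0 means grazing.
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# A typical dielectric (4% F0) seen head-on reflects just its F0...
print(schlick_fresnel(0.04, 1.0))  # 0.04
# ...but approaches a perfect mirror at the edge of the object.
print(schlick_fresnel(0.04, 0.0))  # approximately 1.0
```

This is exactly the lake example in numbers: the reflectance climbs from a few percent straight down to nearly 100% toward the horizon.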

Linear color space vs sRGB

Let us deal with the color space stuff now. The computer works with images in linear space, where there is an equal amount of light intensity change between every shade from black to white. However, our human eye does not perceive brightness this way. So, in order to save space and not send the screen data that humans can’t distinguish, we encode the data to give more of the available values to the ranges that we can see. This encoding is also called gamma correction.

So how do we see color? What you need to know is that the monitor outputs color in the sRGB color space. sRGB is adjusted so that more of the available values are spent in the ranges that we can actually distinguish. This means that before an image is sent to your screen, the computer encodes the linear image to sRGB so that we see a richer image.

The top gradient shows the linear color space; it is then encoded to sRGB for a smoother gradient.

In Blender, when we add an image texture to our cycles material we can decide how its data should be interpreted. By default, the image is treated as color data: Blender assumes it is sRGB-encoded and converts it back to linear space before the node passes it on for the next node to interpret. However, if we change the drop-down from color to non-color data, we tell the node to skip that conversion and pass the values through untouched, as plain linear data. This is useful because all the maps that are not base color should be non-color data. They are not there for our eyes to see but for the shader to know which parts of the material do what. They are there for the computer to read, and the computer reads data in linear space.

The general rule becomes: Set all your texture maps to non-color data except the base color.

If you want to create the conversion yourself in Blender you can use the gamma node set to a value of 2.2. It is not exactly the same as the sRGB curve, but it is so close that you probably can’t see a difference.
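The two curves can be compared directly in Python. The piecewise function below is the official sRGB transfer function; next to it is the plain gamma-2.2 power curve (for encoding linear values, the exponent is 1/2.2):

```python
def srgb_encode(c):
    # Official linear -> sRGB transfer function, per channel.
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1.0 / 2.4) - 0.055

def gamma22_encode(c):
    # The simple power-curve approximation.
    return c ** (1.0 / 2.2)

# An 18% linear grey encodes to nearly the same value either way:
print(round(srgb_encode(0.18), 3))     # 0.461
print(round(gamma22_encode(0.18), 3))  # 0.459
```

The differences only show up in the very darkest shades, which is why the gamma-2.2 approximation is usually indistinguishable by eye.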

Metals and the principled shader

Now that we know a bit about color spaces and fresnel, let us continue with the inputs of the principled shader.

If we are dealing with a fully metallic material we set the metallic slider to 1 and no texture map is needed. The same goes for materials that are non-metals: set the slider to 0 and you are good to go. For any material that requires a combination you will need a texture map to tell Blender which areas of the material are metal and which are not. This should be a grayscale map set to non-color data.

You can also set this to any value between 0 and 1, but that will create a material that does not exist in the real world. Sometimes this can be useful though. Imagine that you are creating a metallic surface with a dust layer on it; then this value can be tweaked to something in between to simulate the dust. It works, but it is not accurate. You can also use gray values to blend between a metallic and a dielectric material where their edges meet.

The metallic input also dictates how other parameters of the principled shader behave. When the input is set to 1, the specular slider has no effect, and the same goes for specular tint; changes to these inputs make no difference. Instead, the specular data comes from the base color map. I will repeat that: specular data for a metal using the metallic/roughness workflow in a physically based rendering scenario comes from the base color map.

The base color map is still set to color data even though it is used to determine the specularity of metals. It also still determines the color, or specular tint if you will. Metals have no diffuse aspect, so it makes sense to switch the behaviour of the base color input for metals. After all, it contains three times as much data as a grayscale map.

Most real world metals have a reflectance value between 70% and 100% at F0. For us that means that any pixel in our base color map should have a value of around 0.7 or higher. If we have a color texture map as input, it means that the pixels should be in the brightest 30% of the sRGB color space.

If we input a color ourselves, using the color wheel widget to set a solid color, we have to look at the hex values. The rgb and hsv sets of sliders are in linear space, so those values won’t be directly comparable, but the output will be correct. This is not very convenient, however, so I usually stick to the hsv sliders and, for metals, don’t let the value slider go below 0.7. The accuracy of this? Well, good enough for me.

If you want real consistency however you should look up the correct colors for any given metal that you want to recreate.

Dielectrics and the principled shader

Now let’s move the metallic slider to 0. We are now in the realm of dielectrics or non-metal materials and the specular slider is in full effect!

However, we are in the metallic/roughness workflow, and the specular input here works very differently from the specular/gloss workflow we used to work with. The input slider goes from 0 to 1, though you can set values higher than 1 by typing them in. This 0 to 1 range is mapped to 0% to 8% specular, so the default setting of 0.5 gives 4% specular. This is the most common range of specular for dielectric materials, and in most cases the default 0.5 does not need to change. Not very exciting. The slider is more of an artistic tool that you can tweak to squeeze some extra “oh yeah!” out of your material. It is not meant to create a 100% specular metallic; that is what the metallic slider is for, and when working with metals the specular is in the base color, as we have learned.
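Put as a formula, this mapping is just a linear remap of the slider into head-on reflectance. A one-liner to illustrate:

```python
def specular_to_f0(specular):
    # The 0-1 specular slider covers 0% to 8% head-on reflectance.
    return 0.08 * specular

print(specular_to_f0(0.5))  # 0.04, the default, i.e. 4% specular
print(specular_to_f0(1.0))  # 0.08, the 8% ceiling for dielectrics
```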

Over to the base color. For non-metals this value has nothing to do with specular and is purely reflected color. Since we don’t want it to contain light or shadow information, the image should not contain pure black or pure white. Try to keep the darkest pixels about 10% lighter than black and the lightest pixels about 5% darker than white in sRGB color space for your base color map.

So, this leaves us with the roughness and normal maps.

Normal map and roughness

Now we will leave the realm of technical terms and head into the artistic mist of “it depends”, “taste” and other interesting things that can’t be defined. Don’t be fooled though, we are still talking about physically based rendering.

We start with the roughness map. This is the most artistic map and can be used to tell the story of your object. You can add scratches, dust, fingerprints or water vapor, just to name a few. There are no real rules here other than experimenting and finding the combination of properties that best tells the tale of what the object has been through. It is a grayscale map where black, or 0, means no roughness and white, or 1, means full roughness.

The normal map is often mistaken for height information, but it actually contains angle data: it determines the direction in which an incoming light ray will bounce off. The result is, however, similar to height information in that it simulates geometry changes. It is another artistic map that helps create more detail on an object far more efficiently than real geometry. Pipe it through a normal map node and into the normal input of the principled shader to use it. It can also be combined with height information from a height map using the bump node.
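The "angle data, not height" point is easy to see in the numbers. A tangent-space normal map stores a direction vector remapped into the 0-1 color range, which is why flat areas look uniformly blue. A quick Python sketch of the decode step:

```python
def decode_normal(rgb):
    # Each 0-1 color channel maps back to a -1..1 vector component.
    return tuple(2.0 * c - 1.0 for c in rgb)

# The typical flat 'normal map blue' pixel decodes to a vector
# pointing straight out of the surface: no slope, no shading change.
print(decode_normal((0.5, 0.5, 1.0)))  # (0.0, 0.0, 1.0)
```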

The other inputs

That is a mouthful of physically based rendering and Blender materials with the new shader. Let’s take a quick look at the rest of the inputs before we move on to the summary. These are specific to special types of materials like skin, car paint, fabric or glass.

We start with sheen and sheen tint. Sheen adds a soft white reflection around the edges and is intended to help simulate cloth; sheen tint mixes the base color into that reflection. Sheen tint only has an effect if sheen is not 0.

The anisotropic input is used to stretch the reflection of an object. Think of brushed metal with a circular pattern, such as the underside of a frying pan. Instead of using a normal map or geometry to simulate the circular pattern that gives the stretch, you can turn this input up to simulate it. The anisotropic rotation dictates the reflection’s rotation, and at the bottom of the shader there is a tangent input that can also affect the rotation in a more precise manner by inputting vector data. Both the tangent and the anisotropic rotation have no effect if anisotropic is at 0.

Next we have the clear coat, clear coat roughness and clear coat normal inputs. These add an extra layer of specular on top of the material; think of car paint with its deep reflections. The clear coat roughness gives this layer its own roughness, and the same goes for the normal. In a lot of cases you will input the same normal map into the clear coat normal as into the ordinary normal input, but in rare cases you may want different normal maps for the two layers. Same thing here: the clear coat roughness and clear coat normal have no effect if the clear coat input is set to 0.

The IOR input only has an effect together with the transmission input. The transmission input allows you to create glass and ice materials with the principled shader, and the IOR dictates the change in angle for light rays passing through the object.
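IOR and the F0 values discussed earlier are two views of the same quantity: for a dielectric, head-on reflectance follows directly from the index of refraction. A short Python illustration of the standard relation:

```python
def f0_from_ior(ior):
    # Head-on reflectance of a dielectric from its index of refraction.
    return ((ior - 1.0) / (ior + 1.0)) ** 2

# Water (IOR ~1.33) and glass (IOR ~1.45) land squarely inside the
# 0-8% dielectric range mentioned earlier.
print(round(f0_from_ior(1.33), 3))  # 0.02
print(round(f0_from_ior(1.45), 3))  # 0.034
```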

Lastly, you have the subsurface scattering (SSS) inputs that help create, you guessed it, subsurface scattering. It uses a different method for calculating subsurface scattering than the older SSS shader, so the results will differ slightly. But it is here to make sure we can use this one shader to create the largest possible range of materials without having to combine different shaders. You also guessed right if you assumed that the subsurface radius and subsurface color have no effect if the subsurface input is set to 0.

Summary

There are of course other areas to consider as well like lighting and post processing. You should also use the filmic color management in Blender to make sure that you have a wider dynamic range available for more realistic renders. If you find any errors, please contact me to let me know so that I can make a change. Physically based rendering is important not only for realism but for consistency as well.

Anyway, what are the important values to take from this? 

  • You should use non-color data for all your textures except base color for both metals and nonmetals
  • For metals, keep your color values in the lightest 30% srgb color space in the base color map.
  • For dielectrics keep the color values above the 10% darkest and below the 5% lightest for the base color map.
  • In most cases the metallic input is either 1 for metals or 0 for dielectrics. Seldomly much in between.
  • When the metallic input is 1, the specular has no effect. The specular is instead calculated from the base color.
  • Roughness is the most artistic map, use it to tell the story of your object
  • The normal map is angle data for outgoing light rays and not height information.

Below is a list of links to some of the sources for this article. If you want more, check out our other articles and tutorials.

The ultimate reference photos workflow in a nutshell

Imagine that you have just decided what your next 3D project is going to be and you are thinking about where to start. Well, reference photos of course. You should always start with references and keep them around throughout your whole project. Pinterest is a great tool to help you sort and organize your reference images, and in this article we will walk through how to use it together with Kuadro and DownAlbum to get good control over the references we choose to use.

Pinterest is a kind of social media platform that is not very social at all, but it is a very good way to keep track of and sort images that you find across the web, or images that other people have already pinned. Pinning is just the word Pinterest uses to say that an image has been saved to a board. A board, in turn, is a folder that is either public or private.

First off, creating an account. Go to pinterest.com and you will immediately be presented with a form to create an account. You can either enter an email and password or log in through an existing Facebook or Google account. Personally, I always use the e-mail approach, because if I ever have problems with one of my other social media accounts, my account for the given web service, Pinterest in this case, will be a separate stand-alone account that I can still access.

Create pinterest account

Once your info is entered you will have to confirm your email address (or not, if you chose one of the other methods) and then you will be ready to start. Pinterest will first ask you a little bit about what you like, a kind of wizard to get your account started and fill your front page with some content. Once inside, click on your name in the top bar.

Pinterest header bar

Here you can see that you have the option to create a board or a secret board. A secret board is only accessible to you; no other people on Pinterest can see or use it. This is usually where I start, but I might turn a board into a regular shared board once it has begun to be populated.

Now we can start to collect our reference photos. We will start by staying within Pinterest and searching for references that other people have already pinned and shared. For example, I have been interested in making a scene with a medieval or older bridge, so I start with those search terms. When you find an image that you like, just hover the mouse over it and click save. You will then be prompted to choose the board that you want to save this pin to. If you have multiple boards, the board will be bumped to the top of the list after a pin, so you don’t have to find it for every pin you make. Keep trying search terms related to your subject and you will soon have a well populated board of images related to your subject.

Pin image bridge

A few tips on searching for reference photos

With the medieval bridge as an example, I might want to search for bricks to get good closeup images to add to the board. I might also use words like fence, because most bridges have a fence or railing to hold on to. Keep narrowing down the search terms to individual pieces. You can also search for the materials those pieces are made of. I might want a stone bridge with a rusty metal railing; maybe I can find a good-looking balcony that can help me with that railing?

When your board is filled with enough reference photos, maybe 50 pins or more depending on your project, you can click on your profile image / name in the top bar again and select your board to view it in all its glory.

If you want to pin images from other sources, Pinterest has a great browser plugin. Go to this link (https://help.pinterest.com/en/articles/all-about-pinterest-browser-button#Web) and choose your browser to get the instructions on how to install the plugin. Once installed it may work a bit differently in different browsers. For instance, in Chrome you get a save icon whenever you hover over an image anywhere on the web. Click it and choose your board; simple as that. You can also click the Pinterest icon in the browser header to get a listing of all the images on the current page, making it easier to find and pin multiple images from the same site.

That is the basics of using Pinterest as a tool for organizing reference images onto boards. One downside of Pinterest, though, is that you can’t rearrange the pins inside a board; they are kept in the order you pin them. To work around this, we will now look at how to download an entire board and then use a program like Kuadro or PureRef to view our references in a customized, organized way.

Download and view a board on your computer

The software that we will need to follow along is the following: Chrome, the DownAlbum extension and Kuadro.

Chrome you probably already have; DownAlbum is just a button to click to add the extension next to our already added Pinterest extension; Kuadro, in turn, is just a download and start, running as a tray icon with no installation needed. We will assume that you have downloaded and installed all the above software.

To start off, use Chrome to browse to the board you want to download. Next, use DownAlbum by clicking its icon; it becomes colored if the site you are on is compatible. Choose "Normal" in the interface that comes up, then click "Output". After a few seconds, depending on how large your board is, you will be presented with the pinned images in a different interface. At the top it says to press ctrl+s, so we better obey. You will be prompted to save an html file; name it something suitable or leave it as is. Wherever you save this file, a subfolder will be created with the same name as the file plus "_files" added to it. Click the up arrow next to the newly downloaded file and choose to open it in its folder. The subfolder containing all the downloaded images will be inside. The download also leaves behind a file with the extension .download and one with the extension .css. You can delete these files as well as the html file. The board is now downloaded.
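If you download boards regularly, the manual cleanup step above can also be scripted. Below is a minimal sketch (my own helper, not part of DownAlbum) that assumes the board page was saved as `<board_name>.html` with its images in the `<board_name>_files` subfolder next to it. It deletes the saved page plus the leftover `.download` and `.css` files and keeps the images:

```python
from pathlib import Path

def clean_downalbum_output(folder, board_name):
    """Delete DownAlbum leftovers, keeping only the downloaded images.

    Assumes the board page was saved as '<board_name>.html' and that the
    images live in the '<board_name>_files' subfolder next to it.
    """
    folder = Path(folder)
    removed = []
    # The saved page itself is not needed once the images are on disk.
    html_file = folder / f"{board_name}.html"
    if html_file.exists():
        html_file.unlink()
        removed.append(html_file.name)
    # Inside the image folder, drop the .download and .css helper files.
    image_dir = folder / f"{board_name}_files"
    for item in image_dir.iterdir():
        if item.is_file() and item.suffix in {".download", ".css"}:
            item.unlink()
            removed.append(item.name)
    return sorted(removed)
```

Run it once per downloaded board, e.g. `clean_downalbum_output("Downloads", "medieval-bridge")`, and only the image files remain for Kuadro to load.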

[Screenshot: DownAlbum icon]
[Screenshot: DownAlbum output]
[Screenshot: DownAlbum save dialog]

Now open Kuadro. It will run, and a tray icon will be added down by the clock. Click it and select "Add local image". Browse to the folder of downloaded images, select them all and hit open. They will be stacked on top of each other, so start dragging the top ones around to view the ones below.

You are now ready to arrange the board on your desktop, perhaps on a second monitor. I find this to be a good workflow: your reference photos are well arranged both online and locally on your hard drive, and viewable in a nice, predictable way.

Kuadro is a very nice piece of software for displaying reference photos. Click the tray icon and choose "About" to learn more about how it can be used to resize, pan around and rotate the images, along with some other features. The shortcuts I use the most are listed here.

  • Click and drag to move
  • Hover over the corners of the active image to resize its canvas
  • Mousewheel to zoom
  • Middle mouse click and hold to pan if image is zoomed
  • H or V to flip the image; sometimes this feature gets stuck, and zooming and moving the image a bit usually resolves it
  • G for grayscale
  • Hold T and drag with the left mouse to decrease or increase transparency of the image
  • Right click on image for menu
[Image: Kuadro logo]
[Screenshot: Kuadro menu]

21 resources for artists you may not know about

As 3D artists, we are always on the hunt for good resources. I like to add more and more bookmarks to my browser whenever I get the chance. Here I have tried to gather some resources that I think are less known, or at least less talked about, in the 3D artist community. The quality may vary, so have a look for yourself and see if anything interests you. The list contains both software and websites.

Websites

Let's start with some web resources. Well, all of these resources are on the web, but you know what I mean.

First off, some texture resources.

Textures.com is a very well-known site for textures. In the Blender community we also have poliigon.com, which was started by Blender Guru. Both are well known. Textures.com has a limited number of downloads for their lower-resolution textures, as well as one high-resolution texture every day that you can download. Both sites have their own take on a royalty-free license, but they are similar in what they allow.

A couple of other resources are CC0 and pretty impressive. The first is the Chocofur material library: all their textures are free and CC0-licensed, and you only need an account to download the entire library. The second CC0 texture resource that I want to share is yethiel.wordpress.com. They have a shared library of about 5GB of CC0 texture resources that can be downloaded.

Enough with the textures already; let's get on to some images.

Pixabay.com is pretty well known; they have over 1.1 million CC0 images now. Not long ago I remember it being around 600k, so it has grown fast. There are some alternatives to this site that follow the same or similar licensing. They are the following, in no particular order.

Now for some places to gather reference images.

The above sites can be used for that, but here are some extras. Pinterest is the most common place for finding reference today, I think, and it is currently not matched in my opinion. Just make sure you draw inspiration from multiple images instead of copying straight off. For architectural rendering, Houzz is an awesome website for nice reference as well. You can make an account and save images in idea books to organize them. Just as a side note for any fantasy or character artists: I don't know many resources for that other than some brilliant games, but I am sure you already figured that out. Instead, I want to point your attention to seventhsanctum.com. They don't have any images, but what they do have is a whole lot of generators. Sometimes you need a story for a character, and a randomly generated text may be a good place to start. Or why not generate a name or a weapon?
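Generators like these are conceptually simple: they combine random picks from small word lists. As a toy illustration (the word lists below are made up for this example, not taken from Seventh Sanctum), a weapon-name generator might look like this:

```python
import random

# Hypothetical word lists for illustration; a real generator like
# Seventh Sanctum's would use far larger, curated lists.
ADJECTIVES = ["Ancient", "Cursed", "Gleaming", "Shattered"]
WEAPONS = ["Blade", "Hammer", "Bow", "Spear"]
SUFFIXES = ["of Embers", "of the North", "of Whispers", "of Dawn"]

def weapon_name(rng=random):
    """Pick one word from each list and join them into a name."""
    return f"{rng.choice(ADJECTIVES)} {rng.choice(WEAPONS)} {rng.choice(SUFFIXES)}"
```

Calling `weapon_name()` a few times yields names such as "Cursed Hammer of Dawn", which can be enough of a seed to start sketching a prop or a backstory.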

OK, I don't really have any good HDRI resources to share, so I will move on to some software that I use together with Blender. I'm sure it can be used with other 3D packages as well.

Software

The obvious ones are the complementary 2D applications Krita and GIMP. I don't use GIMP myself, but I use Krita quite a bit. There is no point in me talking about those, though, since most of the information has been told again and again thousands of times for that software. Anyway, I have some other stuff to share. First off is Kuadro. It's a lightweight piece of software for loading reference images, great for resizing and moving images around on a second monitor to glance over at while you work. Another great place to look is gravit.io. They have a cool vector graphics app as well as an in-browser app for graphic design. Then again, talking about vector graphics without mentioning Inkscape as an open source alternative would not be fair.

Before you get to work, though, you may need to do some prep work. Sure, OneNote or Evernote can be good tools for gathering information about what you will be making, but I also want to give a little hint in the direction of XMind. It's a mind-mapping application that can be nice for fleshing out ideas or planning. OK, OK, I have a few more to share. Next up is RawTherapee. It's kind of like a Lightroom alternative: not as simple, with more technical terminology, and the functionality probably doesn't overlap 100%, but it's good for post-processing single images. Some 3D artists out there like to photograph their own textures, and for them digiCamControl can be a cool piece of software to check out if you have a DSLR for the task. It's basically for controlling your camera from the computer, and it allows for some pretty fine adjustments and control over settings.

Now we are coming to the end. Last up is Sweet Home 3D. I don't know how well known it is, but it has some nice functionality, and you can export most of what you make in Sweet Home 3D to SVG or OBJ format and import it into Blender or another 3D package. For instance, you can make your own floor plan and either export and import it into Blender, or just take a screenshot and use the image as a background image in Blender as a guide to model your house. I hope you found some new resource that you didn't know about before and that can be of help to you. If not, well, at least I enjoyed writing about it.
