Realistic Landscape Rendering with Terragen

François Ruty
March 17, 2015


(Interview conducted in March 2015 for Seekscale, a cloud rendering startup)

In our ongoing quest to understand today’s rendering landscape, we decided to focus on realistic landscape rendering, and interviewed Matt Fairclough, founder of Planetside Software, the company behind Terragen, one of the reference tools for generating and rendering realistic landscapes. Terragen has been used in high-profile movies such as Oblivion, Superman, Hunger Games… We talked with Matt about the current state of the art, what we can expect in the near future, and, last but not least, what it’s like to bring a cutting-edge niche renderer to market.

– First, how did the Terragen project start?

Terragen started as a personal project. When I was a teenager I spent a lot of time playing with graphics programs on the Amiga, and then I was introduced to a landscape generator called Vista. At the same time I was learning to program in AMOS and Blitz Basic, trying to make some simple games.

I never finished anything because there were always too many ideas I wanted to try out. Some time during all this I developed a fascination with fractal landscape algorithms, and the further I went with them, the more I wanted to calculate realistic lighting and try to make photorealistic renders of them. My interest in space exploration and the planets also played a big role. In the mid 90s these experiments started to coalesce into a single program I called ‘Terragen’, but since then it’s been rewritten from scratch at least twice.

– What is specific about landscapes, compared with other kinds of 3D models? What rendering algorithm implementations enable you to be competitive, for landscapes, against generalist renderers such as V-Ray, Mental Ray, Arnold…?

The situation today is quite different from what it used to be. In the early 2000s, most renderers had a hard time dealing with the sheer detail that you need to render a realistic landscape. To do this with polygons, for a 2k image you need millions of them. To keep the data manageable, you need to be careful about where to put those polygons in space, so that there is enough detail close to the camera and you’re not using any more than you need in the distance.

You could model a landscape in, say, Maya, and use a renderer like PRMan to dice it into micropolygons and add detail at render time with displacement, but even with PRMan it was very important to keep the displacement to a minimum if you wanted it to render fast. The more details you put into the model, the less displacement you need, but then you have a heavier model to work with in the viewport and heavier I/O. Terragen 2 was designed to render huge displacements quickly. It assumes that the input model is very low res — such as a plane or a parametric sphere — and then everything you do after that is displacement.
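
To make the “everything after that is displacement” idea concrete, here is a minimal sketch (our illustration, not Planetside’s code): the base model is just a flat patch, subdivision density falls off with camera distance, and all of the terrain’s shape comes from a displacement function evaluated at render time.

```python
import numpy as np

def fbm(x, z, octaves=8, lacunarity=2.0, gain=0.5):
    # Cheap fractal stand-in (sums of sines); a real terrain shader would
    # use Perlin/simplex noise, but the octave structure is the same idea.
    height, freq, amp = 0.0, 1.0, 1.0
    for _ in range(octaves):
        height = height + amp * np.sin(freq * x) * np.cos(freq * z)
        freq *= lacunarity
        amp *= gain
    return height

def dice_and_displace(patch_center, patch_size, cam_pos, detail=64.0):
    # Subdivide a flat patch in proportion to its distance from the camera,
    # then push every vertex up with the fractal function. Nearby patches
    # get dense micro-geometry; distant ones stay coarse, which keeps the
    # data manageable without pre-modeling millions of polygons.
    dist = max(float(np.linalg.norm(patch_center - cam_pos)), 1e-3)
    n = int(np.clip(detail * patch_size / dist, 2, 512))
    xs = np.linspace(-patch_size / 2, patch_size / 2, n) + patch_center[0]
    zs = np.linspace(-patch_size / 2, patch_size / 2, n) + patch_center[2]
    X, Z = np.meshgrid(xs, zs)
    return np.stack([X, fbm(X, Z), Z], axis=-1)  # (n, n, 3) displaced grid

near = dice_and_displace(np.array([0.0, 0.0, 5.0]), 10.0, np.zeros(3))
far = dice_and_displace(np.array([0.0, 0.0, 500.0]), 10.0, np.zeros(3))
print(near.shape, far.shape)  # the distant patch uses far fewer vertices
```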

These days there are renderers that can handle many millions of polygons quite easily, so in terms of pure rendering technology Terragen doesn’t have many of the clear advantages it used to. But it’s not just about rendering hard surfaces. Atmosphere and natural lighting play important roles, and Terragen excels at these things. The volumetric engine is quite fast and can render arbitrary procedural volumes, although its parameters and UI are biased towards clouds. The Terragen community is also a concentration of knowledge in the domain of landscape generation and environment building, and it’s rewarding to see people contributing more to it every day.

We’re now starting to focus on making Terragen more powerful as an asset generation tool, with high quality geometry and texture output so that you can take your environments into any other renderer you like, with whatever level of detail you can handle.

– What GI algorithms does Terragen use?

Terragen uses an irradiance cache, of sorts. This works on surfaces and in volumes. We also combine the cache with some image-space detail enhancements so that the images have some of the rich details that you’d see in a brute force ray traced solution.
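
For readers unfamiliar with the technique, here is a toy sketch of the general irradiance-caching idea (not Terragen’s implementation): expensive lighting computations are stored sparsely and reused for nearby shading points with a similar orientation.

```python
import numpy as np

class IrradianceCache:
    # Store sparse (position, normal, irradiance) samples and reuse them
    # for nearby shading points instead of firing a full hemisphere of
    # rays every time. Real caches use proper error metrics, gradients
    # and spatial trees; this only shows the core reuse logic.
    def __init__(self, max_dist=1.0, min_cos=0.9):
        self.samples = []
        self.max_dist = max_dist
        self.min_cos = min_cos

    def lookup(self, pos, normal):
        # Weighted average of samples close in position and orientation;
        # None signals a cache miss.
        total, wsum = 0.0, 0.0
        for p, n, e in self.samples:
            d = float(np.linalg.norm(pos - p))
            cos = float(np.dot(normal, n))
            if d < self.max_dist and cos > self.min_cos:
                w = (1.0 - d / self.max_dist) * cos
                total += w * e
                wsum += w
        return total / wsum if wsum > 0.0 else None

    def insert(self, pos, normal, irradiance):
        self.samples.append((pos, normal, irradiance))

def shade(cache, pos, normal, compute_irradiance):
    e = cache.lookup(pos, normal)
    if e is None:  # miss: do the expensive hemisphere sampling once
        e = compute_irradiance(pos, normal)
        cache.insert(pos, normal, e)
    return e

cache = IrradianceCache()
expensive = lambda p, n: 1.0  # stand-in for real hemisphere sampling
up = np.array([0.0, 1.0, 0.0])
shade(cache, np.zeros(3), up, expensive)        # miss: computed and stored
shade(cache, np.zeros(3) + 0.1, up, expensive)  # hit: interpolated from cache
```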

Our GI caches are structured so that, for animations, we can distribute their rendering across render farms rather than tying up a single workstation to render the whole cache.

– On what kinds of projects do VFX studios use Terragen? (By the way, I saw that Terragen was used on Serious Sam; I’m a huge fan!)

Most of the major VFX studios that I know of have licenses of Terragen, but the amount of usage varies. It’s been used for at least 25 feature films (those are only the ones we know about), many more TV commercials, game trailers and cinematics, and recently VR projects.

– Linux render nodes became available in Terragen 3. As a vendor of a renderer (among other things), what are the reasons to switch from Windows-only to multiplatform support? In particular, what were the technical challenges on the renderer side?

We’ve been on Windows and Mac since about 2004. Linux support has been requested by our users in VFX for many years. We now have customers who are benefiting from rendering on hundreds of nodes that they otherwise wouldn’t have been able to use. The Linux render node has paid for itself already, and it doesn’t cost us much to maintain.

Most of the problems that came up when porting from Windows to the Mac and Linux for the first time were related to random numbers, and naive or incorrect use of math libraries. By examining differences between platforms and narrowing down to specific causes, we could then fix similar mistakes throughout the code. We sometimes find problems with new code when it’s run on Linux, but when an issue is spotted it’s usually pretty simple to fix.
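
A classic instance of the random-number pitfall (our example, not Planetside’s code): the platform’s built-in rand() produces different sequences on Windows, Mac and Linux, so any procedural pattern seeded through it renders differently on each. Shipping a small self-contained generator sidesteps this entirely:

```python
def xorshift32(state):
    # Tiny deterministic PRNG (xorshift). Unlike the platform rand(),
    # it produces the same sequence everywhere, so procedural noise
    # seeded through it renders identically on every OS. Illustrative
    # only; not the generator Terragen actually uses.
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

s = 2463534242
for _ in range(3):
    s = xorshift32(s)
    print(s / 0xFFFFFFFF)  # identical floats on Windows, Mac and Linux
```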

– Your pricing depends on the output resolution. Other vendors’ pricing involves feature segmentation, or a watermark… Why did you choose this pricing structure?

Nobody likes watermarks. I think it’s bad for business to use tactics that prevent people from doing anything useful with their creations. Many of these users eventually do upgrade if they’re given the chance to discover the software at their own pace.

The Free Non-Commercial Edition does have some feature limitations, not just output resolution. And there is a string of high end production-oriented features that separate Terragen Professional from Terragen Creative. We try to be careful when choosing what features differentiate the various versions of Terragen, to reduce the collateral damage while making it compelling to choose the high end version. We don’t want to limit our products in unnecessary ways.

– What is the typical Terragen use case for business projects? Is it used for landscape generation only, or also for rendering? What software is usually associated with Terragen (upstream or downstream) in a typical pipeline?

I think in the majority of cases Terragen is used for rendering, because you get the lighting, the atmosphere, and view-dependent detail all in one package. And that’s not surprising because that’s exactly what we’ve focused on. In many of those cases the terrain is also modeled in Terragen. Some people are generating HDRs in Terragen, and when they render their characters, vehicles and buildings in mainstream renderers they’re using Terragen HDRs to light them. If they’re combining those with Terragen renders, they’ll get pretty good lighting integration right off the bat, and then just sweeten it in the composite.
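
The HDR workflow Matt describes boils down to a simple lookup: the environment map stores the radiance Terragen computed for every direction, and the external renderer samples it to light its own assets. A minimal sketch for a lat-long map (our illustration):

```python
import numpy as np

def sample_envmap(envmap, direction):
    # Map a world-space direction to lat-long texture coordinates and
    # return the stored radiance. envmap is an (H, W, 3) float array,
    # e.g. an HDR sky rendered in Terragen.
    d = direction / np.linalg.norm(direction)
    u = 0.5 + np.arctan2(d[0], d[2]) / (2.0 * np.pi)
    v = 0.5 - np.arcsin(np.clip(d[1], -1.0, 1.0)) / np.pi
    h, w, _ = envmap.shape
    return envmap[int(v * (h - 1)), int(u * (w - 1))]

sky = np.ones((256, 512, 3)) * [0.3, 0.5, 1.0]        # stand-in HDR sky
print(sample_envmap(sky, np.array([0.0, 1.0, 0.0])))  # light from straight up
```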

At the moment, geometry exported from Terragen isn’t always robust enough for rendering in other packages, except for holdouts, shadow casting and so on. But I know of some interesting cases where terrain has been built procedurally in Terragen and exported to external renderers, including Unity3D. We’ve been told that it simply wouldn’t be possible to create terrains of this detail any other way. As I said earlier, we’re taking a new perspective on this now and want to focus on creating a robust terrain asset creation tool.

In a production with multiple roles upstream of Terragen — such as previs, animation, layout and modeling — sometimes it’s necessary to model landscapes in another application and send them to Terragen. When that happens, the best way to convert a model into a form that gets all the advantages of Terragen is to render it as a greyscale heightfield with a camera placed above the model. We’re planning to release a tool to simplify that process and perhaps remove that step altogether.
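
Pending that tool, the conversion is straightforward to approximate yourself; here is a rough sketch (vertex-based for brevity, where a robust converter would ray-cast against the triangles):

```python
import numpy as np

def mesh_to_heightfield(vertices, grid_res=512):
    # Project mesh vertices straight down onto an XZ grid and keep the
    # highest Y seen in each cell, mimicking a camera placed above the
    # model. A robust converter would ray-cast against the triangles,
    # but the output is the same kind of greyscale heightfield.
    xs, ys, zs = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    ix = ((xs - xs.min()) / (xs.max() - xs.min()) * (grid_res - 1)).astype(int)
    iz = ((zs - zs.min()) / (zs.max() - zs.min()) * (grid_res - 1)).astype(int)
    hf = np.full((grid_res, grid_res), np.nan)
    for i, j, y in zip(iz, ix, ys):
        if np.isnan(hf[i, j]) or y > hf[i, j]:
            hf[i, j] = y  # keep the highest point in this cell
    return hf  # one float per cell, ready to use as displacement

verts = np.array([[0, 2, 0], [5, 0, 1], [9, 4, 8]], dtype=float)
print(np.nanmax(mesh_to_heightfield(verts, grid_res=16)))
```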

Terrains can also be created in ZBrush or Mudbox and exported as vector displacement maps that will come into Terragen to produce almost identical terrains, with overhangs and all. The main requirement is that you work with a plane (or shrink-wrap your model to a deformed plane), and then you can match the plane’s coordinates in the displacement map in Terragen.
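
The difference from a greyscale heightfield is that each texel of a vector displacement map stores a full XYZ offset, which is what preserves overhangs. A minimal sketch of applying one to a flat plane (our illustration, not Terragen’s shader):

```python
import numpy as np

def apply_vector_displacement(plane_verts, vdm, uvs, scale=1.0):
    # vdm is an (H, W, 3) float array where each texel stores a full XYZ
    # offset (nearest-neighbour lookup here, for brevity). Because the
    # offset is a vector rather than a single height, overhangs sculpted
    # in ZBrush/Mudbox survive the round trip onto a flat plane.
    h, w, _ = vdm.shape
    cols = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return plane_verts + scale * vdm[rows, cols]

# A 2x2 quad on the ground plane and a hypothetical 64x64 map:
plane = np.array([[0, 0, 0], [1, 0, 0], [0, 0, 1], [1, 0, 1]], dtype=float)
uv = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
vdm = np.random.default_rng(0).normal(size=(64, 64, 3))
print(apply_vector_displacement(plane, vdm, uv))
```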

– Can Terragen be used to render frames that might not be landscapes but share some common characteristics (e.g. an urban environment)?

Absolutely. Many of our users have rendered models of cars, buildings, entire cities, futuristic megalopolises and incredibly detailed space stations. Sometimes they build these things procedurally in Terragen, by using Terragen’s instancing capabilities on components that they modeled in another package.

You can add volumetrics, and trees can be instanced on imported models. I would not try to sell Terragen as the best tool for these kinds of scenes in a professional production. But there are times when you want to mix artificial structures with landscape, beautiful sky, realistic daylight, volumetric clouds etc. so it makes sense to build it all as a Terragen scene. The renderer might surprise you with how well it handles what you throw at it.

– Does it happen sometimes that studios try to add “artistic looks” in addition to photorealism? How does Terragen play out in those situations?

The lighting certainly doesn’t have to be photorealistic, and the node network is powerful enough to allow a TD to build some non-photorealistic shaders, although perhaps not ones that involve image-space processing. I don’t know of any productions that have used Terragen along these lines, though.

– Does Terragen support GPU rendering? Why, and how does it impact your software development and your business? In your opinion, where is the industry going right now as far as GPU rendering is concerned?

Terragen doesn’t do any processing on the GPU. Most of the renderer is not architected in a way that lends itself to porting to the GPU without a complete overhaul. The components that I can see could benefit most from the GPU tend to play fairly minor roles, so the gains won’t be great if we port them. However, I’m planning on decoupling much of the displacement engine from the output render device, to make it more renderer-agnostic. This will pave the way for hooking into 3rd party renderers, which might be GPU renderers. There’s a lot more we could do with the GPU for interactive previews as long as we accept that not all render features can be previewed accurately, but it would be costly for us to do this just now.

I do think that the GPU should be carefully considered for any new projects where there is an opportunity to target it properly. But GPUs and CPUs are on converging paths. They will eventually merge, and the tools will surely become simpler to use. It might be wise for small companies like us who are heavily invested in CPU code to wait out the storm. And then after they’ve merged, some new kind of processing unit (quantum?) will come along that runs alongside the CPU, and we’ll be asking the same questions all over again. How can we port our renderers to the QPU?

– How does Terragen play out with common render farm managers? (Deadline, RoyalRender, Qube…)

Thinkbox’s Deadline supports Terragen out of the box. RoyalRender and DrQueue both claim to, but I haven’t personally tested them. Qube and others might require a small script to be written, but these usually aren’t complicated. If someone runs into difficulties setting things up with any render farm software, we’ll help them.

– In the latest release, Terragen can now generate terrains from georeferenced satellite images (awesome!). Are you prospecting customers outside of the entertainment industry? Several sectors are interested in VR (virtual reality), such as the military, territorial administration…

We’re not directly aiming at those sectors, but we do have some of those types of customers already.

– In terms of VR, do you have plans for the Oculus Rift? A recent Game of Thrones Oculus Rift recreation used Unity3D for the landscape; I guess Terragen could do the trick too.

Terragen 3.2 has an updated 360-degree spherical camera that also supports stereo. Most renderers can’t do this yet (they can do spherical, or stereo, but not both), so studios have to write their own camera shaders or other hacks to make it work. Terragen does this natively, and it works for the Oculus Rift. We can’t do realtime yet, but of course we can pre-render images or movies in 360 stereo for playback on the Oculus. Terragen was recently used to create the environment in a VR film shown at Sundance this year, called “Evolution of Verse”. In the end, the background didn’t need to use the stereo feature because it was so far away from the viewer, but more ambitious projects in future could benefit from Terragen’s ability to render stereo panoramas.
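
The trick behind spherical stereo is that each eye’s ray origins sit on a small circle rather than at a single point, so one lat-long image pair delivers stereo in every viewing direction. A standard formulation of the ray setup (not Terragen’s camera code):

```python
import numpy as np

def ods_ray(u, v, ipd=0.064, eye=+1):
    # Omni-directional stereo: (u, v) in [0, 1) map to longitude and
    # latitude; the ray origin for each eye is offset half the
    # interpupillary distance along the tangent of a horizontal viewing
    # circle. eye = +1 for the right eye, -1 for the left.
    lon = (u - 0.5) * 2.0 * np.pi
    lat = (0.5 - v) * np.pi
    direction = np.array([np.cos(lat) * np.sin(lon),
                          np.sin(lat),
                          np.cos(lat) * np.cos(lon)])
    tangent = np.array([np.cos(lon), 0.0, -np.sin(lon)])
    return eye * (ipd / 2.0) * tangent, direction

for eye in (-1, +1):
    origin, direction = ods_ray(0.25, 0.5, eye=eye)
    print(eye, origin.round(3), direction.round(3))  # offset origins, same direction
```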

– I have a question about procedural generation, which Terragen, like other software packages, of course counts among its capabilities. On the one hand, procedural generation is automatic (which saves a lot of artist time); on the other hand, it may not match the visual requirements, since it’s (pseudo-)randomly generated. Is your terrain-generation-from-satellite-images feature some kind of “guided” procedural generation?

In a way, yes. Digital Elevation Models are essentially just raster displacement maps with some metadata that allows software like Terragen to correctly scale the altitudes and position the maps in space; these maps are said to be “georeferenced”. Terragen’s heightfield shader can apply some additional fractal detail to the maps as the terrain is rendered, so in a sense that is a kind of guided procedural generation. There’s also a Fractal Warp Shader which does a similar thing to any network of shaders you want to apply it to.
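
To show what that georeferencing metadata actually does, here is a small sketch using the six-value affine transform that GeoTIFF/GDAL tools report for a DEM (the tile numbers below are hypothetical):

```python
def dem_cell_to_world(geotransform, row, col, elevation, vertical_scale=1.0):
    # geotransform = (origin_x, pixel_w, row_rot, origin_y, col_rot,
    # pixel_h): the six-value affine transform GDAL reports. It converts
    # raster indices to map coordinates, which is the "positioning" the
    # metadata provides on top of the raw raster of altitudes.
    ox, pw, rr, oy, cr, ph = geotransform
    x = ox + col * pw + row * rr
    y = oy + col * cr + row * ph
    return x, y, elevation * vertical_scale

# Hypothetical 10 m tile whose top-left corner sits at easting 500000,
# northing 4649776 (UTM-style coordinates):
gt = (500000.0, 10.0, 0.0, 4649776.0, 0.0, -10.0)
print(dem_cell_to_world(gt, row=100, col=250, elevation=1843.0))
```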

So you could paint an approximate terrain map in Photoshop or sculpt it in ZBrush/Mudbox and then extend it with procedural detail. You can combine displacements in all kinds of ways in Terragen, so the satellite data might just provide overall shapes while you add whatever kinds of details you want. There have been some papers that describe more advanced algorithms for analysing features in raster images and filling in details in a context-sensitive manner, but Terragen doesn’t do this.
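
As a rough sketch of that combination (an illustration, not Terragen’s shader network): the raster data provides the overall shapes, while blocky value noise stands in for the procedural detail added below the source resolution.

```python
import numpy as np

def add_fractal_detail(base, factor=4, octaves=4, amplitude=5.0, seed=0):
    # Upsample a coarse heightfield (e.g. satellite-derived) and layer
    # noise octaves on top: the data supplies the overall shapes, the
    # procedural part fills in detail below the source resolution.
    # Blocky value noise stands in for proper Perlin/simplex noise.
    rng = np.random.default_rng(seed)
    big = np.kron(base, np.ones((factor, factor)))  # naive upsample
    detail = np.zeros_like(big)
    amp, res = amplitude, 4
    for _ in range(octaves):
        noise = rng.standard_normal((res, res))
        reps = (big.shape[0] // res + 1, big.shape[1] // res + 1)
        layer = np.kron(noise, np.ones(reps))[:big.shape[0], :big.shape[1]]
        detail += amp * layer
        amp *= 0.5  # each octave is finer and fainter
        res *= 2
    return big + detail

coarse = np.random.default_rng(1).random((32, 32)) * 100.0  # stand-in DEM
print(add_fractal_detail(coarse).shape)  # (128, 128): 4x resolution plus detail
```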

Procedural terrain algorithms are continuing to improve. Resolution-independent procedural erosion is going to become available to Terragen users in the next year or so (maybe as a 3rd party plugin, but we also have some code in the pipeline). These algorithms apply fluvial erosion with complex branching patterns to all sorts of input terrains. Previously you’d have to do this as a separate process on raster data which has finite resolution. Erosion adds a huge amount of realism to hand-modeled terrains as well as procedurally generated terrains. And it gives us another tool to add resolution-independent detail to low detail sources such as satellite data or hand-painted elevation maps.
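
For contrast, here is the raster-based, finite-resolution approach in its simplest form: a toy thermal erosion pass that slides material down slopes steeper than a talus angle. Real fluvial erosion, with water flow and sediment transport, is considerably more involved.

```python
import numpy as np

def thermal_erode(heights, iterations=100, talus=0.01, rate=0.25):
    # Toy thermal erosion on a raster heightfield: wherever the slope to
    # a neighbour exceeds the talus threshold, move a fraction of the
    # excess downhill. Edges wrap around for brevity. This approach is
    # locked to the resolution of the input raster, which is exactly the
    # limitation resolution-independent methods remove.
    h = heights.astype(float).copy()
    for _ in range(iterations):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = np.roll(h, (dr, dc), axis=(0, 1))
            excess = h - nbr - talus
            move = rate * np.where(excess > 0.0, excess, 0.0)
            h -= move                                    # material leaves here...
            h += np.roll(move, (-dr, -dc), axis=(0, 1))  # ...and lands downhill
    return h

eroded = thermal_erode(np.random.default_rng(0).random((64, 64)))
print(eroded.shape)
```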

Originally published at fruty.io on March 17, 2015.

Written by François Ruty

I'm a CTO, and I like to talk about Murphy's law
