
Syntheyes Lens Distortion to 3DEqualizer

I crunched a few numbers yesterday and I think I’ve come up with a solution for converting the lens distortion coefficient calculated by Syntheyes to the one used by 3DEqualizer.

This is necessary since, as of now, Fusion’s native LensDistort tool only supports the latter. Unfortunately, only the first coefficient can be converted since 3DEqualizer’s formula differs for the higher-order coefficients. You can download a tool script that does the calculations for you here:

LensDistort for Syntheyes.py

Here’s the math behind it:

Rendering rec709 in Fusion

This is part of a series of articles dealing with color workflows in Fusion. Other parts:
Linear Gamma Workflow in Fusion Part 1 and Part 2.

I’ve noticed a couple of Google hits for this topic. Maybe you’re interested in this because Fusion seems to lack a color space selector like Nuke has in its Write node? In Fusion, the workflow boils down to using the Gamut tool, where rec709 is listed under its official name “ITU-R BT.709”. But as with everything in compositing, it makes sense to understand what you’re actually doing, since in the end the “how to output rec709” question is just a matter of giving your client what he expects.

The rec709 standard encompasses much more than just the gray-scale gamma curve. It also specifies color primaries, white point and so on, which would also affect the hue of your pixels. But usually, if somebody hands you his footage, telling you it’s in “rec709”, he probably means that he exported it from his black box editing thingy and he wants the result to look neither brighter nor darker. So I’m mostly dealing with the proper gamma curve for rec709 in this post, not the colorimetric mumbo-jumbo that I’m still figuring out myself and that mostly concerns DOPs or colorists.

Option 1:

Source footage is a supposed rec709 mov and you’re not using a linear gamma workflow. Fusion won’t do anything to footage it loads in or saves out, so your workflow will simply be this:

Your end result will look the same as your source footage (except for the effect you’ve added). rec709 in, rec709 out. sRGB in, sRGB out. If it doesn’t, then it might be because you’re writing out movs in different codecs and QuickTime does its own unpredictable color shifting. Use file sequences instead. Also, if you’re comparing Fusion’s output to something else in Nuke, make sure both Read nodes are set to the same color space! By default, image sequences are interpreted as sRGB but movs as something else.

Option 2:

Same as before, but you’re working with linear gamma:

The first Gamut tool allows you to bring in footage from a different color space as well (sRGB jpegs from a camera come to mind). This is the proper workflow for compositing and also the one used in Nuke.

Note: Fusion 6.4 added a new display-referred rec709 space that is used by ARRI’s Alexa color science. The regular rec709 color space has now been renamed to ITU-R BT.709 (scene).

Option 3:

Linear CGI to rec709. First graph is for a linear workflow, the second one for compositing in rec709 directly. The latter isn’t correct, but useful if you’re afraid of this whole LUT thing…

Option 4:

Source footage is a dpx sequence in a logarithmic format (film or something new like Alexa Log C). Your client wants rec709 for reviews and editing (regardless of what the deliverable will be). You really should be working in a linear gamma workflow here:

To convert from logarithmic to linear you’ll have to set the Loader’s conversion values to those specified by the guy who made the dpx files. If you’ve received Alexa Log C, there’s a special conversion that needs to be done. Fusion 6.4 supports it in the Loader and CineonLog tools. In earlier versions you have to bypass Fusion’s log-to-lin conversion that happens in the Loader. Check this forum thread for an implementation of the gamma curve that Nuke suggests for Log C or create your own conversion LUTs on ARRI’s site.
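
For reference, here’s roughly what that Log C to linear conversion looks like, written out in Python. This is only a sketch based on ARRI’s published curve parameters for EI 800 (it’s not the code of any Fusion tool), so double-check the constants against ARRI’s LUT generator for your exposure index:

import math

def logc_to_linear(t):
    # ARRI's published Log C parameters for EI 800 (assumption – verify
    # against ARRI's LUT generator for your exposure index)
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d = 0.247190, 0.385537
    e, f = 5.367655, 0.092809
    if t > e * cut + f:
        return (math.pow(10.0, (t - d) / c) - b) / a
    return (t - f) / e

# 18% gray sits at roughly 0.391 in Log C at EI 800:
print(logc_to_linear(0.391))  # ~0.18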

The result of the Loader will be linear, so a Gamut tool with its output space set to ITU-R BT.709 and Add Gamma enabled results in rec709. Alexa footage will look less saturated than what an editor might see using his ARRI LUT. To fix this, set the Gamut tool’s source color space to Alexa’s wide gamut (requires Fusion 6.4 or later).
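
If you’re wondering what “add gamma” means mathematically, it’s the standard Rec.709 transfer function. Here’s a minimal Python sketch; Fusion’s Gamut tool should do something equivalent, although clamping and float handling may differ:

def linear_to_rec709(L):
    # ITU-R BT.709 transfer function ("add gamma")
    if L < 0.018:
        return 4.5 * L
    return 1.099 * (L ** 0.45) - 0.099

print(linear_to_rec709(0.18))  # mid-gray ends up at about 0.41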

Option 4b:

You have also been given a film LUT which the editor would have used to convert logarithmic footage to rec709 on his side. He’s doing more than just a gamma correction, so if you’re not doing the same he’ll think you screwed up because saturation and highlights look different. This flow assumes that his LUT expects a logarithmic input which gets converted to rec709:

Don’t use his LUT to convert your footage to rec709 before linearizing it. You’ll lose the dynamic range you could get from the dpx. Instead, guess some settings for the log2lin conversion in the Loader. As long as you’re using the same values in the CineonLog tool (which goes lin to log), you’re kinda safe.
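
To see why matching the values matters, here’s the standard Cineon conversion (softclip ignored) as a Python sketch. The 95/685 reference points are just the usual defaults, not necessarily what your dpx files use – the point is that lin2log with the same numbers undoes log2lin exactly:

import math

REF_BLACK, REF_WHITE, DISP_GAMMA = 95.0, 685.0, 0.6  # assumed defaults

def log2lin(code):
    # 10-bit Cineon code value (0-1023) to linear light
    offset = 10.0 ** ((REF_BLACK - REF_WHITE) * 0.002 / DISP_GAMMA)
    gain = 1.0 / (1.0 - offset)
    return gain * (10.0 ** ((code - REF_WHITE) * 0.002 / DISP_GAMMA) - offset)

def lin2log(lin):
    # inverse of log2lin, back to a 10-bit code value
    offset = 10.0 ** ((REF_BLACK - REF_WHITE) * 0.002 / DISP_GAMMA)
    gain = 1.0 / (1.0 - offset)
    return REF_WHITE + math.log10(lin / gain + offset) * DISP_GAMMA / 0.002

print(lin2log(log2lin(445.0)))  # ~445.0 – the round trip is lossless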

Update: Ken Turner has more on how to use LUTs in Fusion.


Linear Gamma in Fusion Part 2

In part one, I wrote about viewer LUTs to aid my linear gamma workflow. This part provides an example of the complete workflow (without CGI) and talks a bit about the basics. For starters, lots of people have already explained the issue of gamma quite nicely, so I’ll keep it short, skip the history and relate things to Fusion.

This article is part of a series of articles dealing with color workflows in Fusion. Other parts:
Linear Gamma Workflow Part 1 and Rendering rec709.

How to set up a linear workflow in Fusion – update: you can also use the “Select View LUT Plugin” comp script to save your LUT settings

Image Editing is Math

Color corrections, image warping… everything. In math, you rely on basic assumptions, for example that 1 + 1 = 2. A more specific example that relates to VFX is this: if you take two photos with a camera and for the second take you keep your shutter open twice as long as before, twice the amount of light will be “recorded” and the resulting image will be twice as bright. In other words, every pixel’s value should be twice as much as before. Fusion uses algorithms that expect this to be the case – as does every other image editing and rendering software out there. They expect a pixel that is twice as bright to have twice the RGB values.

So what’s the big deal? Well, the problem is that every image that looks alright inside an image viewer, a web browser, Photoshop, Premiere or Fusion doesn’t obey this premise. The files all have their RGB values warped (due to reasons going back to the beginnings of computer graphics and CRT monitors) to look normal to our eyes when viewed on a monitor. But if computer programs are looking at the raw numbers, assuming that twice the RGB means twice the brightness, they are mistaken. Their algorithms result in 1 + 1 = 3.

Try it yourself. Increase the gain of a photo by a factor of two. All the color values will be twice as high but the image looks very blown out – not like a photo that has been taken with half the shutter speed (or the aperture opened by one stop). This is why you’ve been adding one color correction after another, tweaking curves, creating masks for highlight areas and doing all kinds of nasty stuff to an image. You’ve unknowingly been doing math in a universe where 1 + 1 = 3.
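
You can reproduce the experiment with a few lines of Python. The sRGB formulas below are the standard ones and stand in for whatever encoding your photo actually uses – the point is that doubling the encoded value is not the same as doubling the light and re-encoding:

def srgb_to_linear(c):
    # standard sRGB decoding ("remove gamma")
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # standard sRGB encoding ("add gamma")
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

pixel = 0.3                                            # an sRGB-encoded value
naive = pixel * 2.0                                    # gain of 2 on the encoded value
correct = linear_to_srgb(srgb_to_linear(pixel) * 2.0)  # double the light instead
print(naive, correct)                                  # ~0.6 vs ~0.42 – the naive gain overshoots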

Gamma Correction

I haven’t used the word gamma yet. In very simple terms, it’s a measurement of how much the color values of an image have been warped and if you read up on it you’ll quickly end up with figures of curved lines, formulas and jargon. Let’s use language a compositor would use (but don’t try to impress your TD, colorist or resident math geek):

An image like the first one to the right that looks good on your monitor by default, i.e. one with warped RGB values, has “gamma”. You need to “remove gamma” or apply a “gamma correction” using a BrightnessContrast or Gamut tool to bring the values back into a dimension where the image is “linear”, i.e. where 1+1=2. Fusion’s algorithms are happy now, your color corrections and defocus effects look much more realistic but oh my… the image is way too dark. But nevertheless, in this dark image, an object that has twice the RGB values really has emitted twice the amount of light towards the camera. Again, this is simplified since digital stills cameras are made to produce visually pleasing jpegs, not physically correct ones.

At the end of your comp, you’ll have to “add gamma” or convert the image back into a “color space” that is suitable for your target audience who will need to see the image with warped RGB values so it appears correctly to human eyes.

Working Linear by Example

The workflow problem that arises is obvious: How do you work on an image if what is being displayed is unnaturally dark? Enter the LUT button. A LUT is a color correction that is applied on-the-fly, just for the viewers, usually in real-time on your GPU. It doesn’t modify the image data of your comp in any way. There are two gamma corrections that make sense for a basic linear workflow: sRGB and ITU-R BT.709 (rec709). Both brighten up a linear image for human consumption and differ slightly in perceived brightness (or rather “gamma”).

You should use the same LUT that was used to linearize your footage. If there are no specifications from your grading or editing department, you can’t go wrong with sRGB – although rec709 makes the image appear a bit darker and will prevent you from crushing the blacks too much. But unless you’re working on a calibrated broadcast monitor and know exactly what your editing or grading department and broadcast station does to your images, neither LUT will give you certainty about how it will look to other people. If your task is simply to add something to an otherwise unchanged plate it also doesn’t matter which LUT you’re using as long as you’ve linearized your footage and you’re working in floating point.

Here’s an example comp that demonstrates a feasible linear workflow. It uses a photograph to demonstrate how a defocus produces photorealistic results when applied with linear gamma. It also includes the ViewShaders from my previous post. The next part of this series will discuss CGI and “illegal” operations in linear space.

Linear Gamma Workflow in Fusion

Of course you can work in linear gamma in Fusion. In fact, Fusion expects images to be linear. If you simply load an sRGB image and start working on it, you’re doing it wrong.

This is part of a series of articles dealing with color workflows in Fusion. Other parts:
Linear Gamma Workflow Part 2 and Rendering rec709.

The Problem

Now, it’s been two years since Nuke set the industry standard for an intuitive linear compositing workflow and Fusion has done nothing to keep up. (update: it’s 2014 so it’s 4 years now and Fusion 7 hasn’t improved the situation…) Why do people still ask in forums whether Fusion can work linear or how to set up an sRGB LUT? Because while Fusion’s LUT engine is much more powerful than Nuke’s, its corresponding interface leaves a lot to be desired (to put it politely). While in Nuke you have to consciously do things to break its linear workflow, in Fusion you have to know what you’re doing to work linear at all. And even then, you’re gonna trip over its LUT GUI all the time. There’s probably a reason why even in eyeon’s official YouTube videos the LUT button is disabled.

The Solution

So here’s the deal. You want to view your images in sRGB or rec709 color space, you want to have access to a gamma slider to quickly check your black levels and all of this should be saved in a convenient default for all your comps.

  1. Download these GPU-accelerated viewshaders: srgb_rec709_viewshaders.zip
  2. Unzip them into your Fuses directory, usually something like C:\Users\Public\Documents\eyeon\Fusion\Fuses\
  3. Restart Fusion and click the triangle next to a viewer’s LUT button. Choose sRGB or rec709.
  4. Enable the LUT by pressing the LUT button.
  5. Right-click a viewer, then select Settings… -> Save Defaults

Now all you need to do is convert all non-linear images by adding a Gamut tool after each Loader, setting its source color space (if unsure, choose sRGB) and making sure the Remove Gamma option is checked (which it should be by default). Done. Oh, the footage should be in float16 or float32 of course. If you’re loading jpegs, make sure the Loader’s depth setting converts them to float for you.
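
If you have a lot of Loaders, you can also script that step. The snippet below is only a sketch of the idea using Fusion’s Python scripting – the “GamutConvert” registry ID and the default-position convention are assumptions on my part, so verify them in your Fusion version before relying on it:

# Drop a Gamut tool behind every Loader in the current comp.
# 'fusion' is predefined in Fusion's script console; the tool ID
# "GamutConvert" is an assumption – check what comp.AddTool expects
# in your version.
comp = fusion.GetCurrentComp()
comp.Lock()
for loader in comp.GetToolList(False, "Loader").values():
    gamut = comp.AddTool("GamutConvert", -32768, -32768)  # default position
    gamut.Input = loader.Output                           # wire it in behind the Loader
comp.Unlock()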

You can choose LUT -> Edit… to bring up gain and gamma controls like in Nuke. These settings will be forgotten once you switch to another LUT, but they are meant for temporary image checks anyway, so that’s not much of a problem. (update: Fusion 7 or later finally remembers these LUT sliders)

These GUI issues aside, Fusion actually provides an excellent API for writing GPU-accelerated tools. As these viewshaders demonstrate, you can use the Cg shader language inside a Lua script (.fuse files). Here are two additional examples that I whipped up in about an hour:

Fusion also allows you to use OpenCL inside Lua, which enables you to write custom tools that execute at lightning speed on your graphics card and are compiled on the fly.

A lot of example code comes with Fusion or can be found on vfxpedia. The guys at Anatomical Travelogue have also published the source of some of their plugins and I’m currently putting the finishing touches on a GPU-accelerated corner pin plugin, the source of which can be found on the pigsfly forum. It took me a couple of days to write, but most of the time wasn’t spent fiddling with the API or with OpenCL but trying to get accustomed to the matrix math involved.

Appendix

Just to complete this topic, here’s the way you’d set up an sRGB LUT in Fusion without my CG shaders. You need to activate the Gamut View LUT, choose LUT -> Edit and set its output to sRGB and “Add Gamma”. If you switch to another LUT and back to Gamut View, you’ll have to repeat these steps. Bummer. Moreover, this won’t give you a convenient gain/gamma slider like Nuke has.

For this, you would have to activate the GainGamma Fuse first. Then, in the viewer’s context menu go to LUT… -> Add New -> Gamut View LUT. You can now save this combination of shaders as a LUT chain in the context menu’s LUT… section or as a default for the viewport by choosing Settings… -> Save Defaults. You can do the same for rec709 and toggle between both setups using the context menu. The LUT button’s popup menu won’t help you anymore.

Now download those shaders instead and spare yourself the trouble 🙂

Nuke’s Smooth Ramp Functions

Nuke’s Ramp node can produce linear and smooth gradients. Here are its formulas. I have reverse-engineered them by trial and error after reading up on interpolation formulas like smoothstep (nicely summed up on this website).

In these formulas, “x” denotes a value from 0 to 1. The result falls into the [0-1] range as well and needs to be scaled by the desired end color if you want an RGB ramp.

// linear
y = x
// plinear: perceptually linear in rec709
y = pow(x, 3)
// smooth: traditional smoothstep
y = x*x*(3 - 2*x)
// smooth0: Catmull-Rom spline, smooth start, linear end
y = x*x*(2 - x)
// smooth1: Catmull-Rom spline, linear start, smooth end
y = x*(1 + x*(1 - x))
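
If you want to sanity-check or plot these curves, here they are again as plain Python functions (my own transcription of the formulas above, not Nuke code):

# The five ramp shapes as Python functions; x is expected to be in the 0-1 range.
ramps = {
    "linear":  lambda x: x,
    "plinear": lambda x: x ** 3,
    "smooth":  lambda x: x * x * (3 - 2 * x),
    "smooth0": lambda x: x * x * (2 - x),
    "smooth1": lambda x: x * (1 + x * (1 - x)),
}

for name, f in ramps.items():
    print(name, [round(f(i / 4), 3) for i in range(5)])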

Here’s a ramp macro for Fusion which allows you to draw ramps directly onto an image like in Nuke. Fusion’s own BG tool is of course much more flexible, but it requires you to merge its gradient manually and it has no easy switch for smoothstep gradients.

Ramp_v01.setting

Compositing Tutorial Part 2

Here’s part 2 of my compositing tutorial series. It’s still about tweaking the camera move of our turntable shot. This time, I’ll demonstrate some expressions to retime one camera move based on the angle of another one so the projection’s distortion is minimized. You can grab the comps and footage on my tutorial page.

Using these techniques, you can integrate matchmoved footage into modified camera moves. This may be necessary because there were problems with the camera move on set (bumpy, wrong pacing, …), the director changed his mind or the intended camera move wouldn’t have been possible in real life.

I have employed this technique on the EXPO project. The scene can also be found at the end of the tutorial video. The actor was shot on a turntable and the camera move went like this:

  • pull camera away from actor
  • rotate actor by 90 degrees
  • dolly in

Of course this makes for a really crappy camera move if you add a CG environment, since it’ll look nothing like what a real camera crane operator would perform. The final shot was designed in 3D, so the plate that was shot had to be integrated somehow. I had to cut out a piece of the original plate; luckily for us, the actor was sitting still. After a bit of additional 2D stabilization (apparently the matchmove wasn’t that exact), the plate matched perfectly.

Python & Fusion Video Tutorial

I’ve finished another video tutorial for Fusion. It covers Python scripting and takes a look at the object-oriented model of controlling Fusion’s tools, querying their attributes and setting keyframes.

Download the script that’s shown in the video.
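
To give you a taste of what that looks like, here’s a minimal sketch (this is not the script from the video, and “Blur1” is just a placeholder – the comp needs a tool with that name for it to do anything):

# 'fusion' is predefined inside Fusion's script console / Python interpreter
comp = fusion.GetCurrentComp()
tool = comp.FindTool("Blur1")                  # placeholder tool name
print(tool.GetAttrs()["TOOLS_Name"])           # query an attribute
tool.SetInput("Blend", 0.5)                    # set a static input value

# Keyframing: attach a BezierSpline modifier, then set values at specific times
tool.Blend = comp.BezierSpline()
tool.SetInput("Blend", 0.0, 0)
tool.SetInput("Blend", 1.0, 24)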