Siggraph Presentation

This guide will be officially introduced at SIGGRAPH 2023 - Houdini Hive on Wednesday, August 9th 2023 at 11:00 AM PST.

Animation/Time Varying Data

Usd encodes time-related data in a very simple format:

{
    <frame>: <value>
}

Table of Contents

  1. Animation/Time Varying Data In-A-Nutshell
  2. What should I use it for?
  3. Resources
  4. Overview
    1. Time Code
    2. Layer Offset (A Non-Animateable Time Offset/Scale for Composition Arcs)
    3. Reading & Writing default values, time samples and value blocks
    4. Time Metrics (Frames Per Second & Frame Range)
    5. Motion Blur - Computing Velocities and Accelerations
    6. Stitching/Combining time samples
    7. Value Clips (Loading time samples from multiple files)

TL;DR - Animation/Time Varying Data In-A-Nutshell

Tip

  • Terminology: A single time/value pair is called a time sample. If an attribute doesn't have time samples, it has a default value (which just means it has a single static value).
  • Time samples are encoded in a simple {<time(frame)>: <value>} dict.
  • If a frame is requested where no time samples exist, the value will be interpolated if the data type allows it and the array lengths of the neighboring time samples match. Value queries before/after the first/last time sample are clamped to these time samples.
  • Time samples are encoded unitless/per frame and not in time units. This means they have to be shifted depending on the current frames per second setting.
  • Only attributes can carry time sample data. (So composition arcs cannot be time animated, only offset/scaled (see the LayerOffset section on this page)).

Reading and writing is quite straightforward:

from pxr import Sdf, Usd
### High Level ###
stage = Usd.Stage.CreateInMemory()
prim_path = Sdf.Path("/bicycle")
prim = stage.DefinePrim(prim_path, "Cube")
size_attr = prim.GetAttribute("size")
for frame in range(1001, 1005):
    time_code = Usd.TimeCode(frame)
    value = float(frame - 1001)
    # .Set() takes args in the .Set(<value>, <frame>) format
    size_attr.Set(value, time_code)
print(size_attr.Get(1005)) # Returns: 3.0 (clamped to the last time sample)

### Low Level ###
from pxr import Sdf
layer = Sdf.Layer.CreateAnonymous()
prim_path = Sdf.Path("/bicycle")
prim_spec = Sdf.CreatePrimInLayer(layer, prim_path)
prim_spec.specifier = Sdf.SpecifierDef
prim_spec.typeName = "Cube"
attr_spec = Sdf.AttributeSpec(prim_spec, "size", Sdf.ValueTypeNames.Double)
for frame in range(1001, 1005):
    value = float(frame - 1001)
    # .SetTimeSample() takes args in the .SetTimeSample(<path>, <frame>, <value>) format
    layer.SetTimeSample(attr_spec.path, frame, value)
print(layer.QueryTimeSample(attr_spec.path, 1004)) # Returns: 3.0

What should I use it for?

Tip

Anything that has time varying data will be written as time samples. Usually DCCs handle the time sample creation for you, but there are situations where we need to write them ourselves, for example when we want to efficiently combine time samples from two different value sources or turn a time sample into a default value (a value without animation).

Resources

Overview

Important

A single frame(time)/value pair is called a time sample. If an attribute doesn't have time samples, it has a default value (which just means it has a single static value). The time value is the active frame that data is exported on. It is not time/FPS based. This means that depending on the frames per second set on the stage, different time samples are read. So if you have a cache that was written at 25 FPS, you'll have to shift/scale the time samples if you want to use the cache at 24 FPS and get the same result. More about this in the examples below.

Danger

Currently USD has no concept of animation curves or any multi time sample interpolation other than linear. This means that in DCCs you'll have to grab the time data around your frame and have the DCC re-interpolate the data. In Houdini this can easily be done via a Retime SOP node.

The possible stage interpolation types are:

  • Usd.InterpolationTypeLinear (Interpolate linearly (if array length doesn't change and data type allows it))
  • Usd.InterpolationTypeHeld (Hold until the next time sample)

These can be set via stage.SetInterpolationType(<Token>). Value queries before/after the first/last time sample will be clamped to these time samples.

Render delegates access the time samples within the shutter open/close values when motion blur is enabled. When they request a value from Usd and no time sample exists at the exact time/frame requested, it will be linearly interpolated if the array length doesn't change and the data type allows it.

Danger

Since value lookups in USD can only have one value source, you cannot combine time samples from different layers at run time. Instead you'll have to re-write the combined values across the whole frame range. This may seem unnecessary, but since USD is a cache based format, the value source lookup is only done once and querying the data afterwards is fast; this is the mechanism that keeps USD performant and able to expand and load large hierarchies with ease.

For example:

# We usually want to write time samples in the shutter open/close range of the camera times the sample count of deform/xform motion blur.
double size.timeSamples = {
    1001: 2,
    1002: 2.274348497390747,
    1003: 3.0096023082733154,
    1004: 4.0740742683410645,
    1005: 5.336076736450195,
    1006: 6.663923263549805,
    1007: 7.9259257316589355,
    1008: 8.990397453308105,
    1009: 9.725651741027832,
    1010: 10,
}
# If we omit time samples, value requests for frames in between will get a linearly interpolated result.
double scale.timeSamples = {
    1001: 2,
    1005: 5.336076736450195,
    1010: 10,
}

Warning

Since an attribute can only have a single value source, we can't have a default value from layer A and time samples from layer B. We can however have default and time sample values from the same value source.

For example:

def Cube "Cube" (
)
{
    double size = 15
    double size.timeSamples = {
        1001: 1,
        1010: 10,
    }
}

If we now request the value without a frame, it will return 15. If we request it with a time, it will either linearly interpolate or return the time sample, if one exists exactly on that frame.

...
size_attr.Get() # Returns: 15.0
size_attr.Get(1008) # Returns: 8.0
...

Usually we'll only have one or the other, but it is quite common to run into this at some point. So when you query data, you should always use .Get(<frame>), as this also works when you don't have time samples: it will then just return the default value. Alternatively you can check for time samples first, which in some cases can be quite expensive. We'll look at some more examples of this, especially around causing node time dependencies, in our Houdini section.

Time Code

The Usd.TimeCode class is a small wrapper class for handling time encoding. Currently it does nothing more than store whether it is a default time code or a time/frame time code with a specific frame. In the future it may get the concept of encoding in time instead of frames, so to future proof your code, you should always use this class instead of setting a time value directly.

from pxr import Sdf, Usd
stage = Usd.Stage.CreateInMemory()
prim_path = Sdf.Path("/bicycle")
prim = stage.DefinePrim(prim_path, "Cube")
size_attr = prim.GetAttribute("size")
## Set default value
time_code = Usd.TimeCode.Default()
size_attr.Set(10, time_code)
# Or:
size_attr.Set(10) # The default is to set `default` (non-per-frame) data.
## Set per frame value
for frame in range(1001, 1005):
    time_code = Usd.TimeCode(frame)
    size_attr.Set(frame, time_code)
# Or
# Similar to how Sdf.Path gets implicitly cast from strings in a lot of places in the USD API,
# the time code is implicitly cast from a Python float.
# It is still recommended to do the above though, to be more future proof against
# potentially encoding time unit based samples.
for frame in range(1001, 1005):
    size_attr.Set(frame, frame)
## Other than that, the TimeCode class only has a few Is/Get methods of interest:
time_code.IsDefault() # Returns: True if no time/frame value was given
time_code.IsNumeric() # Returns: True if not IsDefault()
time_code.GetValue() # Returns: The time value (if not IsDefault())

Layer Offset (A Non-Animateable Time Offset/Scale for Composition Arcs)

The Sdf.LayerOffset is used for encoding a time offset and scale for composition arcs.

Warning

The offset and scale cannot be animated.

Following composition arcs can use it:

  • Sublayers
  • Payloads
  • References (Internal & file based)

The Python exposed LayerOffsets are always read-only copies, so you can't modify them in-place. Instead you have to create new ones and re-write the arc/assign the new layer offset.

from pxr import Sdf, Usd
# The Sdf.LayerOffset(<offset>, <scale>) class has 
# no attributes/methods other than LayerOffset.offset & LayerOffset.scale.
stage = Usd.Stage.CreateInMemory()
prim_path = Sdf.Path("/animal")
root_layer = stage.GetRootLayer()
## For sublayering via Python, we first need to sublayer, then edit offset.
# In Houdini we can't do this directly due to Houdini's stage handling system.
file_path = "/opt/hfs19.5/houdini/usd/assets/pig/pig.usd"
root_layer.subLayerPaths.append(file_path)
print(root_layer.subLayerPaths)
print(root_layer.subLayerOffsets)
# Since layer offsets are read only, we need to assign it to a new one in-place.
# !DANGER! Due to how it is exposed to Python, we can't assign a whole array with the
# new offsets, instead we can only swap individual elements in the array, so that the
# array pointer is kept intact.
root_layer.subLayerOffsets[0] = Sdf.LayerOffset(25, 1) 
## For references
ref = Sdf.Reference(file_path, "/pig", Sdf.LayerOffset(25, 1))
prim = stage.DefinePrim(prim_path)
ref_API = prim.GetReferences()
ref_API.AddReference(ref)
ref = Sdf.Reference("", "/animal", Sdf.LayerOffset(50, 1))
internal_prim = stage.DefinePrim(prim_path.ReplaceName("internal"))
ref_API = internal_prim.GetReferences()
ref_API.AddReference(ref)
## For payloads
payload = Sdf.Payload(file_path, "/pig", Sdf.LayerOffset(25, 1))
prim = stage.DefinePrim(prim_path)
payload_API = prim.GetPayloads()
payload_API.AddPayload(payload)

If you are interested in how to author composition via the low level API, check out our composition section.

Reading & Writing default values, time samples and value blocks

Writing data

Here are the high and low level APIs to write data.

Tip

When writing a large amount of samples, you should use the low level API as it is a lot faster.

from pxr import Sdf, Usd
### High Level ###
stage = Usd.Stage.CreateInMemory()
prim_path = Sdf.Path("/bicycle")
prim = stage.DefinePrim(prim_path, "Cube")
size_attr = prim.GetAttribute("size")
## Set default value
time_code = Usd.TimeCode.Default()
size_attr.Set(10, time_code)
# Or:
size_attr.Set(10) # The default is to set `default` (non-per-frame) data.
## Set per frame value
for frame in range(1001, 1005):
    value = float(frame - 1001)
    time_code = Usd.TimeCode(frame)
    size_attr.Set(value, time_code)
# Clear default value
size_attr.ClearDefault()
# Remove a time sample
size_attr.ClearAtTime(1001)

### Low Level ###
from pxr import Sdf
layer = Sdf.Layer.CreateAnonymous()
prim_path = Sdf.Path("/bicycle")
prim_spec = Sdf.CreatePrimInLayer(layer, prim_path)
prim_spec.specifier = Sdf.SpecifierDef
prim_spec.typeName = "Cube"
attr_spec = Sdf.AttributeSpec(prim_spec, "size", Sdf.ValueTypeNames.Double)
## Set default value
attr_spec.default = 10
## Set per frame value
for frame in range(1001, 1005):
    value = float(frame - 1001)
    layer.SetTimeSample(attr_spec.path, frame, value)
# Clear default value
attr_spec.ClearDefaultValue()
# Remove a time sample
layer.EraseTimeSample(attr_spec.path, 1001)

If you are not sure whether a schema attribute can have time samples, you can check its variability hint. This is only a hint; it is up to you to not write time samples. In some parts of Usd, things will fail or not work as expected if you write time samples for a non-varying attribute.

attr.GetMetadata("variability") == Sdf.VariabilityVarying
attr.GetMetadata("variability") == Sdf.VariabilityUniform 

Reading data

To read data we recommend using the high level API. That way you can also request data from value clipped (per frame loaded Usd) files. The only case where reading directly via the low level API makes sense is when you need to open an on-disk layer and tweak the time samples. Check out our FX Houdini section for a practical example.

Tip

If you need to check if an attribute is time sampled, run the following:

# !Danger! For value clipped (per frame loaded layers),
# this will look into all layers, which is quite expensive.
print(size_attr.GetNumTimeSamples())
# You should rather use:
# This checks if more than one time sample exists,
# stopping the search after the second sample is found.
print(size_attr.ValueMightBeTimeVarying())

If you know the whole layer is in memory, then running GetNumTimeSamples() is fine, as it doesn't have to open any files.

from pxr import Gf, Sdf, Usd
### High Level ###
stage = Usd.Stage.CreateInMemory()
prim_path = Sdf.Path("/bicycle")
prim = stage.DefinePrim(prim_path, "Cube")
size_attr = prim.GetAttribute("size")
size_attr.Set(10) 
for frame in range(1001, 1005):
    time_code = Usd.TimeCode(frame)
    size_attr.Set(frame-1001, time_code)
# Query the default value (must be same value source aka layer as the time samples).
print(size_attr.Get()) # Returns: 10
# Query the animation time samples
for time_sample in size_attr.GetTimeSamples():
    print(size_attr.Get(time_sample))
# Returns:
"""
0.0, 1.0, 2.0, 3.0
"""
# Other important time sample methods:
# !Danger! For value clipped (per frame loaded layers),
# this will look into all layers, which is quite expensive.
print(size_attr.GetNumTimeSamples()) # Returns: 4
# You should rather use:
# This checks if more than one time sample exists,
# stopping the search after the second sample is found.
print(size_attr.ValueMightBeTimeVarying()) # Returns: True
## We can also query the closest time samples to a given frame:
print(size_attr.GetBracketingTimeSamples(1003.3)) 
# Returns: (<Found sample>, <lower closest sample>, <upper closest sample>)
# (True, 1003.0, 1004.0)
## We can also query time samples in a range. This is useful if we only want to lookup and copy
# a certain range, for example in a pre-render script.
print(size_attr.GetTimeSamplesInInterval(Gf.Interval(1001, 1003))) 
# Returns: [1001.0, 1002.0, 1003.0]


### Low Level ###
from pxr import Sdf
layer = Sdf.Layer.CreateAnonymous()
prim_path = Sdf.Path("/bicycle")
prim_spec = Sdf.CreatePrimInLayer(layer, prim_path)
prim_spec.specifier = Sdf.SpecifierDef
prim_spec.typeName = "Cube"
attr_spec = Sdf.AttributeSpec(prim_spec, "size", Sdf.ValueTypeNames.Double)
attr_spec.default = 10
for frame in range(1001, 1005):
    value = float(frame - 1001)
    layer.SetTimeSample(attr_spec.path, frame, value)
# Query the default value
print(attr_spec.default) # Returns: 10
# Query the animation time samples
time_sample_count = layer.GetNumTimeSamplesForPath(attr_spec.path)
for time_sample in layer.ListTimeSamplesForPath(attr_spec.path):
    print(layer.QueryTimeSample(attr_spec.path, time_sample))
# Returns:
"""
0.0, 1.0, 2.0, 3.0
"""
## We can also query the closest time samples to a given frame:
print(layer.GetBracketingTimeSamplesForPath(attr_spec.path, 1003.3)) 
# Returns: (<Found sample>, <lower closest sample>, <upper closest sample>)
# (True, 1003.0, 1004.0)

Special Values

You can also tell a time sample to block a value. Blocking means that the attribute at that frame will act as if it doesn't have any value written ("Not authored" in USD speak) to stage queries and render delegates.

from pxr import Sdf, Usd
### High Level ###
stage = Usd.Stage.CreateInMemory()
prim_path = Sdf.Path("/bicycle")
prim = stage.DefinePrim(prim_path, "Cube")
size_attr = prim.GetAttribute("size")
for frame in range(1001, 1005):
    time_code = Usd.TimeCode(frame)
    size_attr.Set(frame - 1001, time_code)
## Value Blocking
size_attr.Set(Sdf.ValueBlock(), 1001)

### Low Level ###
from pxr import Sdf
layer = Sdf.Layer.CreateAnonymous()
prim_path = Sdf.Path("/bicycle")
prim_spec = Sdf.CreatePrimInLayer(layer, prim_path)
prim_spec.specifier = Sdf.SpecifierDef
prim_spec.typeName = "Cube"
attr_spec = Sdf.AttributeSpec(prim_spec, "size", Sdf.ValueTypeNames.Double)
for frame in range(1001, 1005):
    value = float(frame - 1001)
    layer.SetTimeSample(attr_spec.path, frame, value)

## Value Blocking
layer.SetTimeSample(attr_spec.path, 1001, Sdf.ValueBlock())

Time Metrics (Frames Per Second & Frame Range)

The FPS with which the samples are interpreted is defined by the timeCodesPerSecond/framesPerSecond metadata.

When working with stages, we have the following loading order of FPS:

  1. timeCodesPerSecond from session layer
  2. timeCodesPerSecond from root layer
  3. framesPerSecond from session layer
  4. framesPerSecond from root layer
  5. fallback value of 24

These should match the FPS settings of your DCC. The 'framesPerSecond' metadata is intended as a hint for playback engines (e.g. your DCC/Usdview etc.) on what FPS to set when reading your file. The 'timeCodesPerSecond' metadata describes the actual time sample intent. Thanks to this fallback behavior, we can also author only the 'framesPerSecond' entry and keep both metrics effectively in sync.

When working with layers, we have the following loading order of FPS:

  1. timeCodesPerSecond of layer
  2. framesPerSecond of layer
  3. fallback value of 24

Info

When loading time samples from a sublayered/referenced or payloaded file, USD automatically uses the above mentioned metadata in that layer as a reference for how to bring in the time samples. If the FPS settings mismatch, it automatically scales the time samples to match the stage FPS settings as mentioned above.

Therefore, when writing layers, we should always write these layer metrics, so that the originally intended FPS is known and our caches work independently of the stage FPS.

Warning

In VFX we often start at frame 1001 regardless of the FPS, as it is easier for certain departments like FX to have pre-roll frames to initialize their sims, and it also makes writing frame based expressions easier. That means that when working with both 25 and 24 FPS caches, we have to adjust the offset of the incoming cache.

Let's say we have an animation cache in 25 FPS, starting at 1001, that we want to bring into a 24 FPS scene. As mentioned above, USD handles the scaling for us based on the metadata, but since we still want to start at 1001, we have to offset based on "frame_start * (stage_fps/layer_fps) - frame_start". See the commented out code below for a live example. That way the 25 FPS cache runs in 24 FPS from the same "starting pivot" frame. If we work fully time based, we don't have this problem, as the animation in the 25 FPS cache would have its time samples written at larger frame numbers than the 24 FPS cache and USD's scaling would auto correct it.
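As a quick sanity check, here is the arithmetic behind this as a pure Python sketch (no USD needed; the variable names mirror the example code further down):

```python
layer_fps = 25.0    # FPS metadata of the incoming cache layer
stage_fps = 24.0    # FPS metadata of the stage
frame_start = 1001.0

# A frame is a unitless time code: seconds = frame / FPS.
# Frame 1001 of the 25 FPS layer sits at this many seconds:
print(round(frame_start / layer_fps, 2))               # Returns: 40.04
# Expressed in the 24 FPS stage's frames, the same point in time is:
print(round(frame_start * (stage_fps / layer_fps), 2)) # Returns: 960.96
# The offset formula from above, to keep the cache pivoting on frame 1001:
offset = frame_start * (stage_fps / layer_fps) - frame_start
print(round(offset, 2))                                # Returns: -40.04
```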

(
    timeCodesPerSecond = 24
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 1001
    endTimeCode = 1010
)

The startTimeCode and endTimeCode entries give intent hints on what the (useful) frame range of the USD file is. Applications can use this to automatically set the frame range of the stage when opening a USD file or use it as the scaling pivot when calculating time offsets or creating loop-able caches via value clips.

from pxr import Sdf, Usd
### High Level ###
stage = Usd.Stage.CreateInMemory()
prim_path = Sdf.Path("/bicycle")
prim = stage.DefinePrim(prim_path, "Cube")
size_attr = prim.GetAttribute("size")
for frame in range(1001, 1005):
    time_code = Usd.TimeCode(frame)
    size_attr.Set(frame - 1001, time_code)
# FPS Metadata
stage.SetTimeCodesPerSecond(25)
stage.SetFramesPerSecond(25)
stage.SetStartTimeCode(1001)
stage.SetEndTimeCode(1005)

### Low Level ###
from pxr import Sdf
layer = Sdf.Layer.CreateAnonymous()
prim_path = Sdf.Path("/bicycle")
prim_spec = Sdf.CreatePrimInLayer(layer, prim_path)
prim_spec.specifier = Sdf.SpecifierDef
prim_spec.typeName = "Cube"
attr_spec = Sdf.AttributeSpec(prim_spec, "size", Sdf.ValueTypeNames.Double)
for frame in range(1001, 1005):
    value = float(frame - 1001)
    layer.SetTimeSample(attr_spec.path, frame, value)
# FPS Metadata
time_samples = layer.ListAllTimeSamples()
layer.timeCodesPerSecond = 25
layer.framesPerSecond = 25
layer.startTimeCode = time_samples[0]
layer.endTimeCode = time_samples[-1]

###### Stage vs Layer TimeSample Scaling ######
from pxr import Sdf, Usd

layer_fps = 25
layer_identifier = "ref_layer.usd"
stage_fps = 24
stage_identifier = "root_layer.usd"
frame_start = 1001
frame_end = 1025

# Create layer
reference_layer = Sdf.Layer.CreateAnonymous()
prim_path = Sdf.Path("/bicycle")
prim_spec = Sdf.CreatePrimInLayer(reference_layer, prim_path)
prim_spec.specifier = Sdf.SpecifierDef
prim_spec.typeName = "Cube"
attr_spec = Sdf.AttributeSpec(prim_spec, "size", Sdf.ValueTypeNames.Double)
for frame in range(frame_start, frame_end + 1):
    value = float(frame - frame_start) + 1
    # If we work correctly in seconds everything works as expected.
    reference_layer.SetTimeSample(attr_spec.path, frame * (layer_fps/stage_fps), value)
    # In VFX we often work frame based, starting off at 1001 regardless of the FPS.
    # If we then load the 25 FPS cache in 24 FPS, USD applies the correct scaling, but we have
    # to apply the correct offset to our "custom" start frame.
    # reference_layer.SetTimeSample(attr_spec.path, frame, value)
# FPS Metadata
time_samples = reference_layer.ListAllTimeSamples()
reference_layer.timeCodesPerSecond = layer_fps
reference_layer.framesPerSecond = layer_fps
reference_layer.startTimeCode = time_samples[0]
reference_layer.endTimeCode = time_samples[-1]
# reference_layer.Export(layer_identifier)

# Create stage
stage = Usd.Stage.CreateInMemory()
# If we work correctly in seconds everything works as expected.
reference_layer_offset = Sdf.LayerOffset(0, 1)
# In VFX we often work frame based, starting off at 1001.
# If we then load the 25 FPS in 24 FPS, USD applies the correct scaling, but we have
# to apply the correct offset to our "custom" start frame.
# reference_layer_offset = Sdf.LayerOffset(frame_start * (stage_fps/layer_fps) - frame_start, 1)
reference = Sdf.Reference(reference_layer.identifier, "/bicycle", reference_layer_offset)
bicycle_prim_path = Sdf.Path("/bicycle")
bicycle_prim = stage.DefinePrim(bicycle_prim_path)
references_api = bicycle_prim.GetReferences()
references_api.AddReference(reference, position=Usd.ListPositionFrontOfAppendList)
# FPS Metadata (In Houdini we can't set this via python, use a 'configure layer' node instead.)
stage.SetTimeCodesPerSecond(stage_fps)
stage.SetFramesPerSecond(stage_fps)
stage.SetStartTimeCode(frame_start)
stage.SetEndTimeCode(frame_end)
# stage.Export(stage_identifier)

Motion Blur - Computing Velocities and Accelerations

Motion blur is computed by the hydra delegate of your choice, using either the interpolated position data or the velocity/acceleration data. Depending on the imageable schema, the attribute names differ slightly, e.g. for meshes the names are 'UsdGeom.Tokens.points', 'UsdGeom.Tokens.velocities' and 'UsdGeom.Tokens.accelerations'. Check the specific schema for the property names.

Warning

Depending on the delegate, you will likely have to set specific primvars that control the sample rate of the position/acceleration data.

We can also easily derive velocities/accelerations from position data, if our point count doesn't change:

import numpy as np

from pxr import Sdf, Usd, UsdGeom


MOTION_ATTRIBUTE_NAMES_BY_TYPE_NAME = {
    UsdGeom.Tokens.Mesh: (UsdGeom.Tokens.points, UsdGeom.Tokens.velocities, UsdGeom.Tokens.accelerations),
    UsdGeom.Tokens.Points: (UsdGeom.Tokens.points, UsdGeom.Tokens.velocities, UsdGeom.Tokens.accelerations),
    UsdGeom.Tokens.BasisCurves: (UsdGeom.Tokens.points, UsdGeom.Tokens.velocities, UsdGeom.Tokens.accelerations),
    UsdGeom.Tokens.PointInstancer: (UsdGeom.Tokens.positions, UsdGeom.Tokens.velocities, UsdGeom.Tokens.accelerations)
}
# To lookup schema specific names
# schema_registry = Usd.SchemaRegistry()
# schema = schema_registry.FindConcretePrimDefinition("Mesh")
# print(schema.GetPropertyNames())

def compute_time_derivative(layer, prim_spec, attr_name, ref_attr_name, time_code_inc, multiplier=1.0):
    ref_attr_spec = prim_spec.attributes.get(ref_attr_name)
    if not ref_attr_spec:
        return
    attr_spec = prim_spec.attributes.get(attr_name)
    if attr_spec:
        return
    time_codes = layer.ListTimeSamplesForPath(ref_attr_spec.path)
    if len(time_codes) == 1:
        return
    center_time_codes = {idx: t for idx, t in enumerate(time_codes) if int(t) == t}
    if not center_time_codes:
        return
    attr_spec = Sdf.AttributeSpec(prim_spec, attr_name, Sdf.ValueTypeNames.Vector3fArray)
    time_code_count = len(time_codes)
    for time_code_idx, time_code in center_time_codes.items():
        if time_code_idx == 0:
            time_code_prev = time_code
            time_code_next = time_codes[time_code_idx+1]
        elif time_code_idx == time_code_count - 1:
            time_code_prev = time_codes[time_code_idx-1]
            time_code_next = time_code
        else:
            time_code_prev = time_codes[time_code_idx-1]
            time_code_next = time_codes[time_code_idx+1]
        time_interval_scale = 1.0/(time_code_next - time_code_prev)
        ref_prev = layer.QueryTimeSample(ref_attr_spec.path, time_code_prev)
        ref_next = layer.QueryTimeSample(ref_attr_spec.path, time_code_next)
        if not ref_prev or not ref_next:
            continue
        if len(ref_prev) != len(ref_next):
            continue
        ref_prev = np.array(ref_prev)
        ref_next = np.array(ref_next)
        value = ((ref_next - ref_prev) * time_interval_scale) / (time_code_inc * 2.0)
        layer.SetTimeSample(attr_spec.path, time_code, value * multiplier)

def compute_velocities(layer, prim_spec, time_code_fps, multiplier=1.0):
    # Time Code
    time_code_inc = 1.0/time_code_fps
    prim_type_name = prim_spec.typeName
    if prim_type_name:
        # Defined prim type name
        attr_type_names = MOTION_ATTRIBUTE_NAMES_BY_TYPE_NAME.get(prim_type_name)
        if not attr_type_names:
            return
        pos_attr_name, vel_attr_name, _ = attr_type_names
    else:
        # Fallback
        pos_attr_name, vel_attr_name, _ = MOTION_ATTRIBUTE_NAMES_BY_TYPE_NAME[UsdGeom.Tokens.Mesh]
    pos_attr_spec = prim_spec.attributes.get(pos_attr_name)
    if not pos_attr_spec:
        return
    # Velocities
    compute_time_derivative(layer,
                            prim_spec,
                            vel_attr_name,
                            pos_attr_name,
                            time_code_inc, 
                            multiplier)
    
def compute_accelerations(layer, prim_spec, time_code_fps, multiplier=1.0):
    # Time Code
    time_code_inc = 1.0/time_code_fps
    prim_type_name = prim_spec.typeName
    if prim_type_name:
        # Defined prim type name
        attr_type_names = MOTION_ATTRIBUTE_NAMES_BY_TYPE_NAME.get(prim_type_name)
        if not attr_type_names:
            return
        _, vel_attr_name, accel_attr_name = attr_type_names
    else:
        # Fallback
        _, vel_attr_name, accel_attr_name = MOTION_ATTRIBUTE_NAMES_BY_TYPE_NAME[UsdGeom.Tokens.Mesh]
    vel_attr_spec = prim_spec.attributes.get(vel_attr_name)
    if not vel_attr_spec:
        return
    # Acceleration
    compute_time_derivative(layer,
                            prim_spec,
                            accel_attr_name,
                            vel_attr_name,
                            time_code_inc, 
                            multiplier)

### Run this on a layer with time samples ###
layer = Sdf.Layer.CreateAnonymous()
time_code_fps = layer.timeCodesPerSecond or 24.0
multiplier = 5

def traversal_kernel(path):
    if not path.IsPrimPath():
        return
    prim_spec = layer.GetPrimAtPath(path)
    compute_velocities(layer, prim_spec, time_code_fps, multiplier)
    compute_accelerations(layer, prim_spec, time_code_fps, multiplier)

with Sdf.ChangeBlock(): 
    layer.Traverse(layer.pseudoRoot.path, traversal_kernel)

You can find an interactive Houdini demo of this in our Houdini - Motion Blur section.

Stitching/Combining time samples

When working with Usd in DCCs, we often have a large amount of data that needs to be exported per frame. To speed this up, a common practice is to use a render farm, where multiple machines write out different frame ranges of the scene. The results then need to be combined into a single file or, for heavy data, loaded via value clips (as described in the next section below).

Tip

Stitching multiple files into a single file is usually done for small per frame USD files. If you have large (> 1 GB) files per frame, we recommend using value clips instead. During stitching, all data has to be loaded into memory, so your RAM has to be large enough to handle all the files combined.

A typical production use case is rendering out the per frame render USD files and then stitching those, as they are usually a few MB per frame at most.

Warning

When working with collections, make sure that they are not too big, by selecting parent prims where possible. Currently USD stitches target path lists somewhat inefficiently, which can result in your stitching either not going through at all or taking forever. See our collections section for more details.

USD ships with a standalone usdstitch commandline tool, which is a small Python wrapper around the UsdUtils.StitchLayers() function. You can read more about it in our standalone tools section.

In Houdini you can find it in the $HFS/bin folder, e.g. /opt/hfs19.5/bin.

Here is an excerpt:

...
openedFiles = [Sdf.Layer.FindOrOpen(fname) for fname in results.usdFiles]
... 
# the extra computation and fail more gracefully
try:
    for usdFile in openedFiles:
        UsdUtils.StitchLayers(outLayer, usdFile)
        outLayer.Save()
# if something in the authoring fails, remove the output file
except Exception as e:
    print('Failed to complete stitching, removing output file %s' % results.out)
    print(e)
    os.remove(results.out) 
...

More about layer stitching/flattening/copying in our layer section.

Value Clips (Loading time samples from multiple files)

Note

We only cover value clips in a rough overview here, we might extend this a bit more in the future, if there is interest. We recommend checking out the official docs page as it is well written and worth the read!

USD value clips are USD's mechanism for loading data per frame from different files. They come with a special rule set, which we'll go over below.

Important

Composition-wise, value clips (or rather the layer that specifies the value clip metadata) sit right below the local arc strength and above inherit arcs.

Here are some examples from USD's official docs:

def "Prim" (
    clips = {
        dictionary clip_set_1 = {
            double2[] active = [(101, 0), (102, 1), (103, 2)] 
            asset[] assetPaths = [@./clip1.usda@, @./clip2.usda@, @./clip3.usda@]
            asset manifestAssetPath = @./clipset1.manifest.usda@
            string primPath = "/ClipSet1"
            double2[] times = [(101, 101), (102, 102), (103, 103)]
        }
    }
    clipSets = ["clip_set_1"]
)
{
}

There is also the possibility to encode the value clip metadata via a file wild card syntax (these metadata keys start with 'template'). We recommend sticking to the above format, as it is more flexible and more explicit.


def "Prim" (
    clips = {
        dictionary clip_set_2 = {
            string templateAssetPath = "clipset2.#.usd"
            double templateStartTime = 101
            double templateEndTime = 103
            double templateStride = 1
            asset manifestAssetPath = @./clipset2.manifest.usda@
            string primPath = "/ClipSet2"
        }
    }
    clipSets = ["clip_set_2"]
)
{
}

As you can see, it is pretty straightforward to implement clips with a few key metadata entries:

  • primPath: Will substitute the current prim path with this path when looking up data in the clipped files. This is similar to how you can specify a path when creating references (when not using the defaultPrim metadata set in the layer metadata).
  • manifestAssetPath: An asset path to a file containing a hierarchy of the attributes that have time samples, authored without any default or time sample values.
  • assetPaths: A list of asset paths that should be used for the clip.
  • active: A list of (<stage time>, <asset path list index>) pairs that specify which clip is active on which frame.
  • times: A list of (<stage time>, <asset path time>) pairs that map the current stage time to the time that should be looked up in the active asset path file.
  • interpolateMissingClipValues (Optional): Boolean that activates interpolation of time samples from surrounding clip files, should the active file not have any data on the currently requested time.
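The times mapping behaves like a piecewise linear function from stage time to clip-local time, clamped outside the authored range. Here is a purely illustrative helper that mimics that lookup (this is not part of the USD API, just a sketch for intuition):

```python
def map_stage_time_to_clip_time(stage_time, times):
    """Map a stage time to a clip-local time by linearly interpolating
    the (stage time, clip time) pairs, clamping outside the range."""
    times = sorted(times)
    # Clamp queries before/after the first/last authored pair.
    if stage_time <= times[0][0]:
        return times[0][1]
    if stage_time >= times[-1][0]:
        return times[-1][1]
    # Linearly interpolate between the two surrounding pairs.
    for (stage_0, clip_0), (stage_1, clip_1) in zip(times, times[1:]):
        if stage_0 <= stage_time <= stage_1:
            blend = (stage_time - stage_0) / (stage_1 - stage_0)
            return clip_0 + blend * (clip_1 - clip_0)

# Using the 'times' values from the first example above:
times = [(101, 101), (102, 102), (103, 103)]
print(map_stage_time_to_clip_time(101.5, times))  # 101.5
print(map_stage_time_to_clip_time(100.0, times))  # 101 (clamped)
```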

Warning

The content of individual clip files must be raw data; anything that is loaded in via composition arcs is ignored.

The other files that are needed to make clips work are:

  • The manifest file: A file containing a hierarchy of the attributes that have time samples, authored without any default or time sample values.
  • The topology file: A file containing all the attributes that only have static default data.

Here is how you can generate them: USD ships with a usdstitchclips commandline tool that auto-converts multiple per frame clip files to a value clipped main file for you. This works great if you only have a single root prim you want to load clipped data on.

Unfortunately this is often not the case in production, so this is where the value clip API comes into play. The usdstitchclips tool is a small Python wrapper around that API, so you can also check out the Python code there.

Here are the basics, the main modules we will be using are pxr.Usd.ClipsAPI and pxr.UsdUtils:

Tip

Technically you can remove all default attributes from the per frame files after running the topology layer generation. This can save a lot of disk space, but you then can't partially re-render specific frames of the cache. So only do this if you know the cache is "done".

from pxr import Sdf, UsdUtils

clip_time_code_start = 1001
clip_time_code_end = 1003
clip_set_name = "cacheClip"
clip_prim_path = "/prim"
clip_interpolate_missing = False
time_sample_files = ["/cache/value_clips/time_sample.1001.usd",
                     "/cache/value_clips/time_sample.1002.usd",
                     "/cache/value_clips/time_sample.1003.usd"]
topology_file_path = "/cache/value_clips/topology.usd"
manifest_file_path = "/cache/value_clips/manifest.usd"
cache_file_path = "/cache/cache.usd"

# We can also use:
# topology_file_path = UsdUtils.GenerateClipTopologyName(cache_file_path)
# Returns: "/cache/cache.topology.usd"
# manifest_file_path = UsdUtils.GenerateClipManifestName(cache_file_path)
# Returns: "/cache/cache.manifest.usd"

topology_layer = Sdf.Layer.CreateNew(topology_file_path)
manifest_layer = Sdf.Layer.CreateNew(manifest_file_path)
cache_layer = Sdf.Layer.CreateNew(cache_file_path)

UsdUtils.StitchClipsTopology(topology_layer, time_sample_files)
UsdUtils.StitchClipsManifest(manifest_layer, topology_layer, 
                             time_sample_files, clip_prim_path)

UsdUtils.StitchClips(cache_layer,
                     time_sample_files,
                     clip_prim_path, 
                     clip_time_code_start,
                     clip_time_code_end,
                     clip_interpolate_missing,
                     clip_set_name)
cache_layer.Save()

# Resulting content of "/cache/cache.usd":
"""
(
    framesPerSecond = 24
    metersPerUnit = 1
    subLayers = [
        @./value_clips/topology.usd@
    ]
    timeCodesPerSecond = 24
)

def "prim" (
    clips = {
        dictionary cacheClip = {
            double2[] active = [(1001, 0), (1002, 1), (1003, 2)] 
            asset[] assetPaths = [@./value_clips/time_sample.1001.usd@, @./value_clips/time_sample.1002.usd@, @./value_clips/time_sample.1003.usd@]
            asset manifestAssetPath = @./value_clips/manifest.usd@
            string primPath = "/prim"
            double2[] times = [(1001, 1001), (1002, 1002), (1003, 1003)]
        }
    }
    clipSets = ["cacheClip"]
)
{
}
"""

## API Overview
UsdUtils
# Generate topology and manifest file names based on USD's preferred naming convention.
UsdUtils.GenerateClipTopologyName("/cache_file.usd") # Returns: "/cache_file.topology.usd"
UsdUtils.GenerateClipManifestName("/cache_file.usd") # Returns: "/cache_file.manifest.usd"
# Create layers
topology_layer = Sdf.Layer.CreateNew(topology_file_path)
manifest_layer = Sdf.Layer.CreateNew(manifest_file_path)
cache_layer = Sdf.Layer.CreateNew(cache_file_path)
## Create topology and manifest. This is the heavy part of creating value clips
## as it has to open all layers.
# Generate topology layer, this opens all the time sample layers and copies all
# attributes that don't have time samples and relationships into the topology_layer.
UsdUtils.StitchClipsTopology(topology_layer, time_sample_files)
# Generate manifest layer, this opens all the time sample layers and creates a 
# hierarchy (without values) of all attributes that have time samples. This is the
# inverse of the topology layer, except that it doesn't author any values. The
# hierarchy is then used to determine what a clip should load as animation. 
UsdUtils.StitchClipsManifest(manifest_layer, topology_layer, 
                             time_sample_files, clip_prim_path)
# Generate cache layer, this creates the metadata that links to the above created files.
UsdUtils.StitchClips(cache_layer,
                     time_sample_files,
                     clip_prim_path, 
                     clip_time_code_start,
                     clip_time_code_end,
                     clip_interpolate_missing,
                     clip_set_name)

Since in production (see the next section) we want to put the metadata at the asset roots, we'll usually only run

  • UsdUtils.StitchClipsTopology(topology_layer, time_sample_files)
  • UsdUtils.StitchClipsManifest(manifest_layer, topology_layer, time_sample_files, clip_prim_path)

And then create the clip metadata in the cache_layer ourselves:

from pxr import Sdf, Usd, UsdUtils

clip_set_name = "cacheClip"
clip_prim_path = "/prim"
clip_interpolate_missing = False

time_sample_files = ["/cache/value_clips/time_sample.1001.usd",
                     "/cache/value_clips/time_sample.1002.usd",
                     "/cache/value_clips/time_sample.1003.usd"]
time_sample_asset_paths = Sdf.AssetPathArray(time_sample_files)
topology_file_path = "/cache/value_clips/topology.usd"
manifest_file_path = "/cache/value_clips/manifest.usd"
cache_file_path = "/cache/cache.usd"

topology_layer = Sdf.Layer.CreateNew(topology_file_path)
manifest_layer = Sdf.Layer.CreateNew(manifest_file_path)
cache_layer = Sdf.Layer.CreateNew(cache_file_path)

UsdUtils.StitchClipsTopology(topology_layer, time_sample_files)
UsdUtils.StitchClipsManifest(manifest_layer, topology_layer, 
                             time_sample_files, clip_prim_path)

# Author the clip metadata on the cache layer ourselves.
stage = Usd.Stage.Open(cache_layer)
# For simplicity in this example we already know where the asset roots are.
# If you need to check where they are, you can traverse the topology layer,
# as it contains the full hierarchy of the per frame files.
prim = stage.DefinePrim("/valueClippedPrim", "Xform")
# The clips API is a small wrapper around setting metadata fields. 
clips_API = Usd.ClipsAPI(prim)
# Most function signatures work via the following args:
# clips_API.<method>(<methodArg>, <clipSetName>)
# We'll only be looking at non-template value clips related methods here.
## We have Get<MethodName>/Set<MethodName> for all metadata keys:
# clips_API.Get/SetClipPrimPath 
# clips_API.Get/SetClipAssetPaths
# clips_API.Get/SetClipManifestAssetPath
# clips_API.Get/SetClipActive
# clips_API.Get/SetClipTimes 
# clips_API.Get/SetInterpolateMissingClipValues
## To get/set the whole clips metadata dict, we can run:
# clips_API.Get/SetClips()
## To get/set what clips are active:
# clips_API.Get/SetClipSets

## Convenience methods for generating a manifest based on the
# clips set by clips_API.SetClipAssetPaths
# clips_API.GenerateClipManifest
## Or from a user specified list. This is similar to UsdUtils.StitchClipsManifest()
# clips_API.GenerateClipManifestFromLayers

## Get the resolved asset paths in 'assetPaths' metadata.
# clips_API.ComputeClipAssetPaths

clips_API.SetClipPrimPath(clip_prim_path, clip_set_name)
clips_API.SetClipAssetPaths(time_sample_asset_paths, clip_set_name)
clips_API.SetClipActive([(1001, 0), (1002, 1), (1003, 2)], clip_set_name)
clips_API.SetClipTimes([(1001, 1001), (1002, 1001), (1003, 1001)], clip_set_name)
clips_API.SetInterpolateMissingClipValues(clip_interpolate_missing, clip_set_name)
# We can also print all clip metadata
print(clips_API.GetClips())
# Enable the clip
clip_sets_active = Sdf.StringListOp.CreateExplicit([clip_set_name])
clips_API.SetClipSets(clip_sets_active)
# Returns:
"""
{'cacheClip': 
    {
        'primPath': '/prim',
        'interpolateMissingClipValues': False, 
        'active': Vt.Vec2dArray(3, (Gf.Vec2d(1001.0, 0.0), Gf.Vec2d(1002.0, 1.0), Gf.Vec2d(1003.0, 2.0))),
        'assetPaths': Sdf.AssetPathArray(3, (Sdf.AssetPath('/cache/value_clips/time_sample.1001.usd'),
                                            Sdf.AssetPath('/cache/value_clips/time_sample.1002.usd'),
                                            Sdf.AssetPath('/cache/value_clips/time_sample.1003.usd'))),
        'times': Vt.Vec2dArray(3, (Gf.Vec2d(1001.0, 1001.0), Gf.Vec2d(1002.0, 1001.0), Gf.Vec2d(1003.0, 1001.0)))
    }
}
"""

How will I use it in production?

Value clips are the go-to mechanism when loading heavy data, especially for animation and fx. They are also the only USD mechanism for looping data.

As discussed in more detail in our composition section, caches are usually attached to the asset root prim. As you can see above, the metadata must always specify a single root prim to load the clip on; this is usually your asset root prim. You could also load it on a prim higher up in the hierarchy, but this makes your scene incredibly hard to debug and is not recommended.

This means that if you are writing an fx cache with a hierarchy that has multiple asset roots, you'll be attaching the metadata to each individual asset root. This way you can have a single value clipped cache that is loaded in multiple parts of your scene. You can then payload/reference in this main file, with the value clip metadata per asset root prim, which allows you to partially load/unload your hierarchy as usual.

Your file structure will look as follows:

  • Per frame(s) files with time sample data:
    • /cache/value_clips/time_sample.1001.usd
    • /cache/value_clips/time_sample.1002.usd
    • /cache/value_clips/time_sample.1003.usd
  • Manifest file (A lightweight USD file with value-less attributes that specifies which attributes are animated in the clip files):
    • /cache/value_clips/manifest.usd
  • Topology file (A USD file that has all attributes with default values):
    • /cache/value_clips/topology.usd
  • Value clipped file (It sublayers the topology.usd file and writes the value clip metadata (per asset root prim)):
    • /cache/cache.usd

Typically your shot or asset layer USD files will then payload or reference in the individual asset root prims from the cache.usd file.
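To make this more concrete, the value clipped file with multiple asset roots could look roughly like this (a sketch only; the /assetA and /assetB prim names are made up for illustration, each asset root carries its own copy of the clip metadata):

```
(
    subLayers = [
        @./value_clips/topology.usd@
    ]
)

def "assetA" (
    clips = {
        dictionary cacheClip = {
            double2[] active = [(1001, 0), (1002, 1), (1003, 2)]
            asset[] assetPaths = [@./value_clips/time_sample.1001.usd@, @./value_clips/time_sample.1002.usd@, @./value_clips/time_sample.1003.usd@]
            asset manifestAssetPath = @./value_clips/manifest.usd@
            string primPath = "/assetA"
            double2[] times = [(1001, 1001), (1002, 1002), (1003, 1003)]
        }
    }
    clipSets = ["cacheClip"]
)
{
}

def "assetB" (
    clips = {
        dictionary cacheClip = {
            double2[] active = [(1001, 0), (1002, 1), (1003, 2)]
            asset[] assetPaths = [@./value_clips/time_sample.1001.usd@, @./value_clips/time_sample.1002.usd@, @./value_clips/time_sample.1003.usd@]
            asset manifestAssetPath = @./value_clips/manifest.usd@
            string primPath = "/assetB"
            double2[] times = [(1001, 1001), (1002, 1002), (1003, 1003)]
        }
    }
    clipSets = ["cacheClip"]
)
{
}
```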

Important

Since we attach the value clips to asset root prims, our value clipped caches can't have values above asset root prims.

If this all sounds a bit confusing don't worry about it for now, we have a hands-on example in our composition section.

Value Clips and Instanceable Prims

For more info on how value clips affect instancing, check out our composition section. There you will also find an example with multiple asset roots re-using the same value clipped cache.

How does it affect attribute time samples and queries?

When working with time samples in value clips there are two important things to keep in mind:

Subframes

The active and times metadata entries need to have sub-frames encoded. Let's look at our example:

Three per frame files, with each file having samples around the centered frame:

  • /cache/value_clips/time_sample.1001.usd: (1000.75, 1001, 1001.25)
  • /cache/value_clips/time_sample.1002.usd: (1001.75, 1002, 1002.25)
  • /cache/value_clips/time_sample.1003.usd: (1002.75, 1003, 1003.25)

They must be written as follows in order for the subframe time samples to be read:

double2[] active = [(1000.5, 0), (1001.75, 1), (1002.75, 2)] 
double2[] times = [(1000.5, 1000.5), (1001.75, 1001.75), (1002.75, 1003)]

As you may have noticed, we don't need to specify the centered or .25 frames; these will be interpolated linearly against the next entry in the list.

Queries

When we call attribute.GetTimeSamples(), we get the samples in the interval that is specified with the times metadata. For the example above this would return:

(1000.75, 1001, 1001.25, 1001.75, 1002, 1002.25, 1002.75, 1003, 1003.25)

If we only wrote the metadata on the main frames:

double2[] active = [(1001, 0), (1002, 1), (1003, 2)] 
double2[] times = [(1001, 1001), (1002, 1002), (1003, 1003)]

It will return:

(1001, 1001.25, 1002, 1002.25, 1003, 1003.25)

Important

With value clips it can be very expensive to call attribute.GetTimeSamples(), as this opens all clip layers to gather the samples in the interval specified in the metadata; it does not just read the value clip metadata. If possible, use attribute.GetTimeSamplesInInterval(), as this only opens the clip layers that overlap the requested interval range.