STLCG++: A Masking Approach for Differentiable Signal Temporal Logic Specification

1Carnegie Mellon University, 2Scaled Foundations, 3University of Washington
4NVIDIA

Efficient path replanning for satisfying STL requirements using STLCG++ in GRID. You can specify arbitrary timed requirements in STL, and the nominal path will be modified to satisfy them, aiding data generation for training VLAs on complex long-horizon tasks. Try out our hands-on JAX demo here!

Abstract

Signal Temporal Logic (STL) offers a concise yet expressive framework for specifying and reasoning about spatio-temporal behaviors of robotic systems. Attractively, STL admits the notion of robustness—the degree to which an input signal satisfies or violates an STL specification—thus providing a nuanced evaluation of system performance. Notably, the differentiability of STL robustness enables direct integration into robotics workflows that rely on gradient-based optimization, such as trajectory optimization and deep learning. However, existing approaches to evaluating and differentiating STL robustness rely on recurrent computations, which become inefficient with longer sequences, limiting their use in time-sensitive applications. In this paper, we present STLCG++, a masking-based approach that parallelizes STL robustness evaluation and backpropagation across timesteps, achieving more than 1000× faster computation time than the recurrent approach (STLCG). We also introduce a smoothing technique for differentiability through time interval bounds, expanding STL’s applicability in gradient-based optimization tasks over spatial and temporal variables. Finally, we demonstrate STLCG++’s benefits through three robotics use cases and provide open-source Python libraries in JAX and PyTorch for seamless integration into modern robotics workflows.

STLCG++: Advancing Neuro-Symbolic Reasoning in Robotics with Differentiable Signal Temporal Logic

In modern robotics and deep learning, safe behavior generation is essential for deployment. Signal Temporal Logic (STL) is a powerful tool for defining and reasoning about the spatio-temporal behavior of dynamic systems, particularly in robotics. It enables precise specification of requirements such as:


“A drone must enter a designated region within 10 seconds and remain there for at least 5 seconds.”
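Such a requirement can be encoded with STL's eventually (F) and always (G) temporal operators; here `inRegion` is a placeholder predicate for "the drone is inside the designated region":

```latex
\varphi \;=\; \mathbf{F}_{[0,10]}\,\mathbf{G}_{[0,5]}\,\mathrm{inRegion}
```

Read left to right: at some time within the first 10 seconds (F), the drone must reach a state from which `inRegion` holds continuously for the next 5 seconds (G).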


More importantly, STL provides quantitative robustness metrics, allowing optimization-based methods to evaluate how well a system adheres to given specifications. As such, we have seen a growing interest in the inclusion of STL objectives and constraints in various optimization-based robotics problems such as: trajectory optimization, reinforcement learning, and control synthesis. However, existing approaches to STL evaluation and differentiation often suffer from computational inefficiencies. Many rely on recurrent operations, which scale poorly for long time horizons and limit real-time applications. This bottleneck prevents widespread adoption in machine learning and robotics pipelines that require fast and scalable optimization.
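As a minimal illustration of quantitative robustness (a NumPy sketch, not the STLCG++ API): for a finite signal, the robustness of "always (signal > c)" is the worst-case margin over the horizon, and "eventually (signal > c)" is the best-case margin. The sign tells you whether the specification is satisfied.

```python
import numpy as np

# Illustrative signal: e.g., distance-to-obstacle samples over time.
signal = np.array([5.0, 3.0, 8.0, 2.0, 6.0])

# Robustness of "always (signal > 0)": the minimum margin over the horizon.
# Positive => satisfied; negative => violated.
rho_always = np.min(signal - 0.0)

# Robustness of "eventually (signal > 7)": the maximum margin over the horizon.
rho_eventually = np.max(signal - 7.0)

print(rho_always)      # 2.0  -> satisfied, with margin 2
print(rho_eventually)  # 1.0  -> satisfied, with margin 1
```

Because these min/max reductions are (sub)differentiable in the signal values, robustness can serve directly as an objective or constraint in gradient-based optimization.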


Enter STLCG++, a new masking-based approach that eliminates the inefficiencies of recurrent processing. Inspired by attention mechanisms in transformer models, STLCG++ replaces sequential evaluations with a fully parallelizable operation, dramatically improving computation speed and scalability.


Our approach achieves over 1000× faster robustness evaluation and backpropagation by replacing sequential recurrent computations with parallel masked operations.

STLCG++ vs. STLCG: Transitioning from sequential recurrent processing to parallelized masking operations.

The Masking Approach: Efficiency and Accuracy Combined

The original STLCG relied on sequential (and computationally heavy) recurrent computations, which limited its adoption. STLCG++ overcomes this sequential dependency through a masking strategy inspired by the self-attention mechanisms in transformer architectures. By converting a one-dimensional signal into a structured multi-dimensional array and applying carefully crafted masks, STLCG++ processes long sequences all at once rather than sequentially. This not only accelerates computation but also preserves the fidelity of gradient information—a critical factor for robust optimization.
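The core idea can be sketched in a few lines of NumPy (a simplified illustration, not the STLCG++ implementation): tile the signal into a 2D array where row t is the signal "viewed" from start time t, mask out the past with a neutral value, and reduce each row in one vectorized pass. Here we evaluate the robustness of "always (signal > 0)" at every start time simultaneously.

```python
import numpy as np

T = 5
signal = np.array([5.0, 3.0, 8.0, 2.0, 6.0])

# Tile the 1D signal into a (T, T) array: row t is the signal as seen
# from start time t.
tiled = np.tile(signal, (T, 1))

# Boolean mask selecting the "future": entry (t, tau) is True iff tau >= t.
mask = np.arange(T)[None, :] >= np.arange(T)[:, None]

# Robustness of "always (signal > 0)" at EVERY start time t, computed in
# one parallel reduction. Masked (past) entries are set to +inf so they
# cannot affect the min.
rho_always = np.where(mask, tiled, np.inf).min(axis=1)

print(rho_always)  # [2. 2. 2. 2. 6.]
```

A recurrent implementation would walk backward through the signal one step at a time; here, every row reduces independently, which maps directly onto batched GPU kernels (and the `min` can be replaced by a smooth approximation for better-behaved gradients).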

Dramatic Performance Gains on Modern Hardware

STLCG++ achieves significant computational improvements over its recurrent predecessors. In benchmark tests, the masking approach scales gracefully with increasing sequence lengths and shows particularly impressive speed gains on GPU architectures—with some tests demonstrating over 1000× faster evaluations when optimizing across multiple time intervals.

Computation Time Comparison: STLCG++ (masking approach) vs. traditional recurrent methods on CPU and GPU.

Differentiable Time Interval Bounds for Enhanced Optimization

A standout feature of STLCG++ is its ability to smoothly approximate time interval bounds through a differentiable mask. This allows optimization not only over control inputs but also over temporal parameters—opening new avenues for specification mining and data-driven model refinement. The smooth mask leverages a sigmoid-based function to provide a gradual transition, which can be finely tuned via a smoothing parameter.
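One common sigmoid-based construction for such a soft interval mask is sketched below (an illustrative parameterization; the exact form used in STLCG++ may differ). The mask is differentiable in the interval bounds `t1` and `t2`, so gradients can flow into the temporal parameters themselves.

```python
import numpy as np

def smooth_mask(t, t1, t2, k=10.0):
    """Sigmoid-based soft indicator for the time interval [t1, t2].

    k is the smoothing parameter: larger k -> closer to a hard 0/1 mask;
    smaller k -> more gradual transitions (more informative gradients
    with respect to t1 and t2).
    """
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Product of a rising edge at t1 and a falling edge at t2.
    return sigmoid(k * (t - t1)) * sigmoid(k * (t2 - t))

t = np.linspace(0.0, 10.0, 101)
soft = smooth_mask(t, 2.0, 6.0, k=20.0)   # nearly a hard indicator of [2, 6]
softer = smooth_mask(t, 2.0, 6.0, k=2.0)  # smooth shoulders, useful gradients
```

Replacing the hard boolean mask with this soft mask (e.g., as weights in a smooth min/max) lets a single gradient-based optimizer tune spatial variables and interval bounds jointly, which is what enables the specification-mining use case.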

Smooth Mask Visualization: Demonstrating the effect of varying the smoothing parameter on time interval selection.
STL Parameter Mining: Visualizing the optimization landscape for learning time interval bounds.

Real-World Applications: From Trajectory Planning to Deep Generative Modeling

We apply STLCG++ to multiple real-world applications:

Trajectory Planning with STL: Optimized trajectories navigating target regions efficiently.

Open-Source and Ready for Integration

To ensure that the benefits of STLCG++ are accessible to the community, we have released open-source Python libraries for both JAX and PyTorch. These libraries integrate seamlessly into modern robotics and deep learning workflows, providing researchers and practitioners with powerful tools to accelerate their projects.

BibTeX

@misc{kapoor2025stlcgmaskingapproachdifferentiable,
    title={STLCG++: A Masking Approach for Differentiable Signal Temporal Logic Specification}, 
    author={Parv Kapoor and Kazuki Mizuta and Eunsuk Kang and Karen Leung},
    year={2025},
    eprint={2501.04194},
    archivePrefix={arXiv},
    primaryClass={cs.RO},
    url={https://arxiv.org/abs/2501.04194}, 
}