STL_dropout
This repository provides a toolbox that scales a neurosymbolic algorithm for training neural controllers on temporal tasks to longer time horizons and higher state dimensions. The toolbox is available here. The main obstacle to scalability is vanishing/exploding gradients when the task horizon is long or the system dimension is high. We provide a novel gradient approximation technique, designed specifically for policy optimization with Signal Temporal Logic (STL) objectives, that combines the idea of stochastic depth with the notion of critical predicates in STL. The result is a scalable algorithm that enables policy optimization for complex, real-world temporal tasks.
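To illustrate the two ingredients mentioned above, here is a minimal sketch (not the toolbox's actual API; all names are hypothetical). STL robustness of "always"/"eventually" formulas reduces to a min/max over time, so only the *critical* timestep determines the robustness value; a stochastic-depth-style mask can then keep the critical timestep while randomly dropping the others from the backward pass to shorten the gradient path:

```python
import numpy as np

np.random.seed(0)


def always_robustness(signal):
    """Robustness of 'always (signal >= 0)': the min over time."""
    return signal.min()


def eventually_robustness(signal):
    """Robustness of 'eventually (signal >= 0)': the max over time."""
    return signal.max()


def critical_time(signal, op="min"):
    """Index of the critical predicate: the single timestep
    that determines the min/max robustness value."""
    return int(signal.argmin() if op == "min" else signal.argmax())


def backprop_mask(horizon, critical_idx, keep_prob=0.2):
    """Stochastic-depth-style mask over timesteps: always keep the
    critical timestep, randomly drop the rest (their gradient
    contributions would be detached in an actual training loop)."""
    mask = np.random.rand(horizon) < keep_prob
    mask[critical_idx] = True
    return mask


# Toy 1-D signal over a horizon of 4 steps.
signal = np.array([1.0, -2.0, 3.0, 0.5])
idx = critical_time(signal, op="min")
mask = backprop_mask(len(signal), idx)
print(always_robustness(signal), eventually_robustness(signal), idx, mask[idx])
```

In the actual algorithm the dropped timesteps would be detached from the computation graph during backpropagation, which is what mitigates the vanishing/exploding gradient problem over long horizons.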