STP
Product Line
Hyssos Tech is building Next-Gen multimodal user interfaces for C2, simulation, training, complex equipment, and unmanned/autonomous systems. Our microservices architecture leverages Natural Language Processing and touch for fast, easy input of commands, supporting course-of-action creation and rapid building of training scenarios. Additional AI capabilities help guide planning, for instance by providing automated simulation feedback. STP's microservices engine can be embedded into handheld, desktop, or cloud applications.
1. STP Engine
STP's JavaScript and .NET SDKs provide the API to our microservices platform, with access to robust AI capabilities: Natural Language Processing, a Computer Vision sketch recognizer, Computational Linguistics task recognition, sync and task matrices, Task Org management, export to C2 systems and simulators, role-based editing (S2, S3, S4, FSO, Engineering), OPORD products, and more. Plan Canvas is a sample of the NLP and editing capabilities. The JavaScript SDK can be found at our GitHub repository (capabilities on this site are constantly evolving).
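To make the speech-plus-sketch idea concrete, the following is a minimal, self-contained JavaScript sketch of multimodal fusion: pairing a spoken phrase with the map gesture that occurred closest in time to yield a candidate symbol placement. All names here (`makeSketch`, `fuseMultimodal`, the timestamp-pairing heuristic) are illustrative assumptions, not the actual STP SDK API.

```javascript
// A sketched gesture: where and when the user drew on the map.
function makeSketch(lat, lon, timestamp) {
  return { lat, lon, timestamp };
}

// A speech event: what the user said, and when.
function makeSpeech(transcript, timestamp) {
  return { transcript, timestamp };
}

// Fuse the two modalities: pair the spoken phrase with the sketch
// closest in time (within maxSkewMs), yielding a placement command.
function fuseMultimodal(speech, sketches, maxSkewMs = 2000) {
  let best = null;
  for (const s of sketches) {
    const skew = Math.abs(s.timestamp - speech.timestamp);
    if (skew <= maxSkewMs &&
        (best === null || skew < Math.abs(best.timestamp - speech.timestamp))) {
      best = s;
    }
  }
  if (best === null) return null; // no gesture close enough in time
  return { symbol: speech.transcript, location: { lat: best.lat, lon: best.lon } };
}

// Example: the user says "infantry company" while tapping the map.
const sketches = [makeSketch(38.88, -77.03, 1000), makeSketch(38.90, -77.05, 5000)];
const command = fuseMultimodal(makeSpeech("infantry company", 1200), sketches);
console.log(command);
// → { symbol: 'infantry company', location: { lat: 38.88, lon: -77.03 } }
```

In the real system, recognition of the sketch shape and doctrinal interpretation of the phrase are handled by the platform's AI services; this sketch only illustrates the time-alignment step that joins the two input streams.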
2. STP for ArcGIS Pro
An Advanced Natural Language Planning Workstation
STP for ArcGIS Pro is a doctrine-based, natural-input paradigm and technology that bypasses complex UIs by fusing speech and sketch. It is a Next-Gen user interface for ArcGIS Pro, leveraging natural language and sketch for fast, easy input, along with other AI capabilities that help guide planning: for warfighters today and, in the future, for public safety teams.
3. STP Neuro-Symbolic Copilot
STP stays at the forefront of AI and LLM advancement through cutting-edge R&D on a capability we call the Neuro-Symbolic Copilot. This innovation anchors the LLM with doctrinal NLP, enabling flexible input of military data and minimizing the LLM's hallucinations, resulting in superior Q&A performance.
4. STP-HUM/T
We are exploring how autonomous vehicles can work closely with humans in the field by communicating queries and commands multimodally, i.e., using speech and sketch, and in turn receiving information, such as the vehicle's goal (e.g., the route it proposes to take), graphically on a handheld tablet.