STP

Product Line

Hyssos Tech builds next-generation multimodal user interfaces for C2, simulation, training, complex equipment, and unmanned/autonomous systems. Our microservices architecture leverages natural language processing and touch for fast, easy command input, whether creating courses of action or rapidly building training scenarios. Additional AI capabilities help guide planning, for instance by providing automated simulation feedback. STP's microservices engine can be embedded into handheld, desktop, or cloud applications.

STP Cloud Engine

The JavaScript SDK provides the API to our microservices platform, giving access to robust AI capabilities: natural language processing, a computer vision sketch recognizer, and computational linguistics task recognition, plus sync and task matrices, Task Org management, export to C2 systems and simulators, role-based editing (S2, S3, S4, FSO, Engineering), OPORD products, and more. Plan Canvas is a sample of the NLP and editing capabilities. The JavaScript SDK can be found at our GitHub repository (capabilities on this site are constantly evolving):

https://github.com/hyssostech/sketch-thru-plan-sdk-resources
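To illustrate the speech-plus-sketch input style the SDK enables, here is a minimal, self-contained sketch of multimodal fusion: pairing a speech transcript with a sketch gesture that arrives close in time to produce a symbol placement. All names, types, and the fusion window below are illustrative assumptions, not the actual SDK API; consult the repository above for the real interfaces.

```typescript
// Illustrative sketch only: types and the fusion window are hypothetical,
// not the real STP SDK API. See the GitHub repository for actual interfaces.

interface SpeechEvent { transcript: string; timeMs: number; }
interface SketchEvent { points: Array<{ lat: number; lon: number }>; timeMs: number; }
interface SymbolPlacement { designator: string; location: { lat: number; lon: number }; }

// Hypothetical window within which speech and sketch are treated as one input.
const FUSION_WINDOW_MS = 2000;

function fuseInputs(speech: SpeechEvent, sketch: SketchEvent): SymbolPlacement | null {
  // Pair the two modalities only if they arrived within the fusion window.
  if (Math.abs(speech.timeMs - sketch.timeMs) > FUSION_WINDOW_MS) return null;
  // Use the first sketch point as the anchor location for a point symbol.
  return { designator: speech.transcript, location: sketch.points[0] };
}

// Example: the user says "infantry company" while tapping the map.
const placement = fuseInputs(
  { transcript: "infantry company", timeMs: 1000 },
  { points: [{ lat: 38.88, lon: -77.03 }], timeMs: 1400 },
);
console.log(placement);
```

The design point this sketches is that neither modality alone is a complete command: speech carries the "what" and the sketch carries the "where", and the engine fuses them into a single doctrinal symbol.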

STP Desktop/Embedded Engine

The .NET SDK provides the API to our microservices platform, giving access to the same robust AI capabilities: natural language processing, a computer vision sketch recognizer, and computational linguistics task recognition, plus sync and task matrices, Task Org management, export to C2 systems and simulators, role-based editing (S2, S3, S4, FSO, Engineering), OPORD products, and more. With the .NET SDK you can build your own custom desktop or mobile/handheld application with STP capabilities. The .NET SDK is available upon request.

STP for ArcGIS Pro

An Advanced Natural Language Planning Workstation

STP for ArcGIS Pro is a doctrine-based, natural-input planning technology that bypasses complex UIs by fusing speech and sketch. It is a next-generation user interface for ArcGIS Pro, leveraging natural language and sketch for fast, easy input, along with other AI capabilities that help guide planning for warfighters today and for public safety teams in the future.

STP-HUM/T

We are exploring how autonomous ground vehicles can work closely with humans in the field by communicating queries and commands multimodally, i.e., using speech and sketches, and in turn receiving information, such as the vehicle's goal (e.g., the route it proposes to take), graphically on a handheld tablet.