> section: TECH_STACK

How the stack behaves

We integrate three layers so the robot can operate among people in a shared space and follow clear rules.

01

Motion layer: maps and paths.

The robot builds and updates a map of the environment, plans paths, avoids obstacles and follows the rules of the local space. We set speed, safety distances and safety zones for each location.
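
As a rough sketch (not our actual configuration format), per-location limits could be expressed like this; the location names, fields and values are assumptions for illustration:

    # Hypothetical per-location motion limits; all names and values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class MotionProfile:
        max_speed_mps: float       # speed cap for this space
        safety_distance_m: float   # minimum clearance kept around people
        no_go_zones: list          # named zones the planner must route around

    PROFILES = {
        "hotel_lobby":  MotionProfile(0.6, 1.0, ["reception_desk"]),
        "exhibit_hall": MotionProfile(0.9, 0.8, ["stage_front"]),
    }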

02

Perception and voice: seeing and listening.

Microphones, cameras and other sensors feed the perception layer. The system filters noise, tracks nearby people, listens to speech and detects when a person expects a response.
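
A minimal sketch of the "expects a response" check, assuming hypothetical upstream signals for speech activity, head orientation, distance and silence duration:

    # Hypothetical addressee check; input signals and thresholds are illustrative.
    def expects_response(speech_ended: bool, facing_robot: bool,
                         distance_m: float, silence_s: float) -> bool:
        """True when a nearby person has finished speaking while facing the robot."""
        close_enough = distance_m < 2.0    # roughly conversational range
        pause_reached = silence_s > 0.7    # short pause after the utterance
        return speech_ended and facing_robot and close_enough and pause_reached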

03

Behavior layer: rules and escalation.

Behavior logic determines the action in each situation: respond, ask a clarifying question, hand off to a team member, or move out of the way. Each behavior pattern is tied to clear rules, logs and operational boundaries.
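
A simplified sketch of how such rules might be expressed in code; the situation fields, thresholds and action names are assumptions rather than our production logic:

    # Illustrative behavior rules with logging and escalation; not the production rule set.
    import logging
    from typing import Optional

    log = logging.getLogger("behavior")

    def decide(intent: Optional[str], confidence: float, blocking_path: bool) -> str:
        """Map a perceived situation to one of four actions and log the decision."""
        if blocking_path:
            action = "move_out_of_the_way"
        elif intent is None or confidence < 0.4:
            action = "hand_off_to_team_member"   # escalate when we do not understand
        elif confidence < 0.7:
            action = "ask_clarifying_question"
        else:
            action = "respond"
        log.info("intent=%s confidence=%.2f -> %s", intent, confidence, action)
        return action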

Technical trust

We use standard, verifiable components and document every deployment. Clients have visibility into what runs where and who manages each part of the system. Documentation includes a basic architecture diagram, a description of the space settings, a list of integrations, and a description of the support regime.
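
As a sketch of the shape such a deployment record could take (not the actual template), with every field name and value being illustrative:

    # Illustrative deployment record mirroring the documentation items listed above.
    deployment_record = {
        "architecture_diagram": "docs/architecture_v1.pdf",
        "space_settings": {"max_speed_mps": 0.6, "quiet_hours": "22:00-07:00"},
        "integrations": ["calendar_api", "ticketing_system"],
        "support_regime": {"hours": "09:00-18:00", "contact": "ops_team"},
    }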

> arch: LAYERED_STACK
> mode: PRODUCTION
> status: ONLINE

BEHAVIOR
Rules & Escalation
Decision Logic · Human Escalation · Safety Rules

PERCEPTION
Sensing & Voice
Computer Vision · Speech Recognition · Environment Sensing

MOTION
Navigation & Paths
SLAM Mapping · Path Planning · Obstacle Avoidance

HARDWARE BASE
Robot Platform
Sensors · Actuators · Compute Unit

data_flow: BIDIRECTIONAL