
Scale AI Inference Without Vendor Lock-In

ABOUT US

Toraks delivers production-ready AI infrastructure focused on scalable inference deployment.

 

Our MACx accelerator is engineered for data center environments requiring performance, efficiency, and architectural flexibility.

 

We prioritize predictable performance, lower operating costs, and independence from proprietary ecosystems.

MACx OVERVIEW

MACx is a high-performance AI inference accelerator designed for production data center environments.

Performance: 340 TFLOPS (FP16/BF16) · 680 TOPS (INT8) · 42 TFLOPS (FP32)

Memory: 64 GB HBM · 1.8 TB/s

Interface & Power: PCIe 5.0 x16 · 400W TDP
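As a rough illustration of what these published figures imply, the following back-of-envelope calculation (our own, not a vendor benchmark) derives the FP16 roofline ridge point and a memory-bound latency floor for a hypothetical 7B-parameter FP16 model:

```python
# Back-of-envelope roofline figures from the MACx specs quoted above.
# Illustrative calculations only; the 7B-parameter model is a hypothetical example.

PEAK_FP16_FLOPS = 340e12   # 340 TFLOPS (FP16/BF16)
HBM_BANDWIDTH   = 1.8e12   # 1.8 TB/s

# Arithmetic intensity (FLOPs per byte moved) needed to become compute-bound:
ridge_fp16 = PEAK_FP16_FLOPS / HBM_BANDWIDTH   # ~189 FLOPs/byte

# Decoding one token of a 7B-parameter model in FP16 streams ~14 GB of
# weights from HBM, giving a memory-bound latency floor per token:
weight_bytes = 7e9 * 2                          # 2 bytes per FP16 parameter
t_memory_s = weight_bytes / HBM_BANDWIDTH       # ~7.8 ms per token

print(f"FP16 ridge point: {ridge_fp16:.0f} FLOPs/byte")
print(f"7B FP16 decode floor: {t_memory_s * 1e3:.1f} ms/token")
```

Workloads below the ridge point (as single-batch LLM decoding typically is) are limited by the 1.8 TB/s memory bandwidth rather than by peak TFLOPS, which is why the HBM figures matter as much as the compute numbers.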


Software Compatibility

PyTorch | TensorFlow | ONNX | Kubernetes | Linux | C++ | Python

 

Technical Documentation

Brochure (PDF) · Whitepaper (PDF) · Research Abstract (PDF)

Contact

We support production AI deployments across data centers and enterprise environments.

 

For technical discussions, pricing, or partnership opportunities, contact our team directly.

 

 

Sales & Partnerships

 

General Inquiries

 

 

© 2026 Toraks d.o.o. All rights reserved.