ABOUT US
Toraks delivers production-ready AI infrastructure focused on scalable inference deployment.
Our MACx accelerator is engineered for data center environments requiring performance, efficiency, and architectural flexibility.
We prioritize predictable performance, lower operating costs, and independence from proprietary ecosystems.
WHO IS MACx BUILT FOR
MACx is designed for organizations deploying AI inference at scale within production data center environments.
It integrates into standard PCIe-based server infrastructure and supports enterprise-grade deployment models.
Ideal for:
• Data center operators
• AI infrastructure providers
• Cloud inference platforms
• Enterprise AI teams
• Research institutions
• Government and public sector infrastructure projects
OUR MISSION
To deliver scalable AI acceleration without vendor dependency.
We believe AI infrastructure should remain open, efficient, and economically sustainable.
MACx is built to reduce operational cost while preserving architectural flexibility and long-term deployment freedom.
MACx OVERVIEW
MACx is a high-performance AI inference accelerator designed for production data center environments.
Performance: 340 TFLOPS (FP16/BF16) · 680 TOPS (INT8) · 42 TFLOPS (FP32)
Memory: 64 GB HBM · 1.8 TB/s bandwidth
Interface & Power: PCIe 5.0 x16 · 400 W TDP
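Taken together, the compute and bandwidth figures imply a simple roofline balance point: dividing peak throughput by memory bandwidth gives the arithmetic intensity (operations per byte moved) at which a workload shifts from memory-bound to compute-bound. A minimal sketch, using only the spec numbers above; the roofline interpretation itself is illustrative, not a vendor claim:

```python
# Roofline balance points implied by the published MACx specs.
# Spec numbers come from the table above; the roofline framing is illustrative.

PEAK_FP16_FLOPS = 340e12   # 340 TFLOPS (FP16/BF16)
PEAK_INT8_OPS   = 680e12   # 680 TOPS (INT8)
MEM_BW_BPS      = 1.8e12   # 1.8 TB/s HBM bandwidth

# A kernel becomes compute-bound once its arithmetic intensity
# exceeds peak compute divided by memory bandwidth.
fp16_balance = PEAK_FP16_FLOPS / MEM_BW_BPS   # ~189 FLOPs per byte
int8_balance = PEAK_INT8_OPS / MEM_BW_BPS     # ~378 ops per byte

print(f"FP16 balance point: {fp16_balance:.0f} FLOPs/byte")
print(f"INT8 balance point: {int8_balance:.0f} ops/byte")
```

In practice this means bandwidth-heavy inference stages (e.g. decoding with small batch sizes) sit on the memory side of the roofline, which is why the 1.8 TB/s figure matters as much as the TFLOPS numbers.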
Software Compatibility
PyTorch | TensorFlow | ONNX | Kubernetes | Linux | C++ | Python
Testing Program:
• 1 node (4 GPUs): €100 per 24 h
• 1 node (8 GPUs): €200 per 24 h
• Up to 10 nodes · testing cost is deducted from the final order
Technical Documentation