AI Workstation Enterprise – L40S
🖥️ Chassis & Design
- Model: Supermicro SuperWorkstation 747BTQ-R2K28B (enterprise full-tower)
- Dimensions: 66 × 21 × 56 cm
- Weight: ~25 kg
- Enterprise-grade steel chassis with acoustic optimization
- Tool-less panels for easy maintenance
- Hot-swap drive bays for storage flexibility
- Multi-zone airflow with dust filters
❄️ Cooling System
- CPU: 360mm enterprise-grade liquid cooler
- Case: 6× 140mm industrial PWM fans with thermal monitoring
- GPU: NVIDIA L40S blower cooling with a dedicated airflow channel
- Intelligent fan curves for 24/7 datacenter-grade operation
⚡ Power Supply
- Redundant 1800W Platinum-rated PSU (hot-swap)
- Enterprise power monitoring & surge protection
- Designed for 24/7 uptime
💻 Core Hardware
- GPU: NVIDIA L40S – 48GB ECC GDDR6
- CPU: AMD Ryzen Threadripper PRO / Intel Xeon W (56–64 cores)
- RAM: 512GB DDR5 ECC (expandable up to 1TB)
- Storage: 2× 4TB enterprise-grade NVMe Gen4 SSDs (RAID 1/0)
- Motherboard: Supermicro W790 / WRX80 workstation-class board
- Networking: Dual 25GbE NICs – cluster-ready
🧠 Software & AI Stack (Pre-installed)
- Ubuntu 22.04 LTS
- CUDA 12.x + cuDNN
- Docker + NVIDIA Container Toolkit
- Kubernetes-ready (K8s + Helm preconfigured)
- Slurm cluster configuration for HPC environments
- PyTorch, TensorFlow, JAX preinstalled (see the quick sanity check below)
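
To confirm that the preinstalled stack sees the L40S on first boot, a short check along these lines can be run from the stock Python environment. This is a minimal sketch assuming the shipped PyTorch build; the script and its expected output are illustrative, not part of the vendor tooling.

```python
# Minimal GPU-stack sanity check (illustrative sketch, not vendor tooling).
# Assumes the stock PyTorch build preinstalled on the workstation.
import torch


def check_gpu_stack() -> None:
    # CUDA must be visible to PyTorch for the L40S to be usable.
    assert torch.cuda.is_available(), "CUDA not available - check driver / CUDA 12.x install"

    device = torch.device("cuda:0")
    props = torch.cuda.get_device_properties(device)
    print(f"GPU:            {props.name}")                           # expected: NVIDIA L40S
    print(f"VRAM:           {props.total_memory / 1024**3:.0f} GB")  # expected: ~48 GB
    print(f"CUDA (runtime): {torch.version.cuda}")

    # Small matmul on the GPU to confirm kernels actually execute.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    c = a @ b
    torch.cuda.synchronize()
    print(f"Matmul OK, checksum: {c.sum().item():.2f}")


if __name__ == "__main__":
    check_gpu_stack()
```

Running the same script inside a container started with `docker run --gpus all ...` additionally exercises the NVIDIA Container Toolkit path.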
📦 In the Box
- Fully assembled enterprise workstation (48-hour burn-in tested)
- Power cables (EU/US/UK compatible)
- Quick Start Guide (digital + printed card)
- USB recovery stick with OS + drivers
- Full hardware & stress-test certification report
🛡️ Warranty & Service
- 36-month enterprise warranty included (parts & labor)
- Optional extended coverage up to 60 months
- AI Enterprise Support Pack: €399/month (CUDA & framework updates, cluster integration, remote monitoring)
- 6 months of the Enterprise Support Pack included free
📐 Use Cases
- Enterprise AI inference & deployment
- Multi-user R&D environments
- Startups replacing cloud GPU instances with on-premise compute
- Cluster-ready for HPC and datacenter setups
