SCALABLE AI DEVELOPMENT AI Power, Perfectly Packaged
The Lenovo ThinkStation PGX small form factor, powered by the NVIDIA GB10 Grace Blackwell Superchip, enables simplified AI workflows right at your desk. It comes preloaded with the NVIDIA AI software stack and seamlessly integrates with your workstation for on-device AI development, eliminating the complexity and cost of cloud-based solutions.
AI EXCELLENCE Big Models, Small Chip, Limitless Potential
The NVIDIA GB10 Grace Blackwell Superchip combines cutting-edge graphics with a powerful multicore Arm CPU to deliver an energy-efficient system-on-chip (SoC) solution. It’s built to handle complex AI tasks and massive datasets with incredible speed and efficiency, making it perfect for tackling demanding workloads locally.
FULL-STACK NVIDIA AI PLATFORM Ready-to-Go AI Development Environment
A compact entry point to advanced AI, the ThinkStation PGX is preloaded with the NVIDIA DGX™ OS and NVIDIA AI software stack, providing an optimized space for AI model development. With tools like PyTorch and Jupyter® Notebooks, engineers can easily prototype, fine-tune, and run inference on models for smooth deployment to data center or cloud platforms.
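As a minimal sketch of the prototype → fine-tune → inference loop described above, the following assumes only a PyTorch install; the toy model and random stand-in data are illustrative, not part of the preloaded NVIDIA software stack.

```python
import torch
import torch.nn as nn

# Toy regression model standing in for a real network under development.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Use the local GPU when one is available (on-device development),
# otherwise fall back to CPU so the sketch runs anywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# One fine-tuning step on random stand-in data.
x = torch.randn(32, 8, device=device)
y = torch.randn(32, 1, device=device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Inference with gradients disabled, ready for export or deployment.
model.eval()
with torch.no_grad():
    prediction = model(x[:1])
print(prediction.shape)
```

The same script runs unchanged on a laptop CPU or the workstation's GPU, which is the point of prototyping locally before promoting a model to data center or cloud platforms.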
SANDBOXED DEVELOPMENT ENVIRONMENT Your AI Playground: Experiment. Build. Succeed.
The ThinkStation PGX workstation offers developers a secure sandbox for prototyping AI models, freeing up cluster resources for production workloads. This streamlines workflows, keeps IP on-premises, and reduces security risks — ideal for experimentation without compromising enterprise standards.
Specifications may vary depending upon region / model.