The Return of the UNIX Workstation (Now With AI)

I was a heavy UNIX user throughout the 1990s, back when powerful RISC UNIX workstations roamed the Earth. SGI Octanes, Sun Ultras, DEC AlphaStations… all built for science, engineering, and development work, all running RISC processors (because RISC was the future!), and all costing a small fortune. God, they were powerful. And on many of them, I installed Linux (basically open source UNIX).
But in the 2000s, commodity Intel/AMD x86 PCs offered a better performance/cost ratio, and the powerful RISC UNIX workstations went extinct. That was OK, though… those PCs still ran Linux, and Apple Macs ran macOS (which is UNIX), so all of the concepts and commands never went anywhere. In fact, the web and cloud – including the associated development frameworks – grew up around Linux. So when the cloud and Linux suddenly became “a thing” in mainstream media in the early 2010s, we smug UNIX folk just sat back and smiled.

Fast-forward to 2020: Apple announces they’re moving from Intel x86 processors to an insanely fast ARM processor of their own design. ARM is a RISC architecture, and macOS is UNIX. So Apple brought back the powerful RISC UNIX workstation of the 1990s (albeit much cheaper).
Now, macOS is a good UNIX, but it’s not as good as Linux. Luckily, the Asahi open source project reverse-engineered Apple’s platform and figured out how to run Linux on it natively (after all, Linux had supported ARM for decades before Apple made the switch). I was one of the first users, and today I run Fedora Asahi Remix Linux natively on a Mac Studio M1 (20-core ARM processor with 128 GB of memory and 4 TB of storage).
This was perfect, since I run a lot of virtualized Linux servers and containers, as well as develop a lot of software to run in them. So a powerful RISC workstation running Linux was a breath of fresh air and dramatically increased my productivity.
However, I now need to do a lot of data science work, and that involves training, tuning, and running various machine learning and AI models.
Luckily, in 2025 NVIDIA released their Grace Blackwell GB10 desktop supercomputer platform: a Linux workstation with a 20-core ARM CPU, 128 GB of memory, 4 TB of storage, and an absurdly powerful NVIDIA GPU capable of up to 1 petaFLOP of AI performance. In other words: a modern RISC UNIX workstation built specifically for AI and data science workloads.

While NVIDIA’s version of the GB10 is called the DGX Spark, I got the Dell version (the Dell Pro Max with GB10) back in November; it has better cooling. I originally used it as a server that I would connect to remotely, but it has since become my main workstation. After all, it has essentially the same specs as my Mac Studio (20-core ARM processor, 128 GB of memory, 4 TB of storage) and runs Linux out of the box, without relying on an open source project that reverse-engineered Apple’s hardware. But it also has that powerful NVIDIA GPU for my data science work, and NVIDIA provides some polished tools that make it easy to run all the data science stuff I need.
Performance-wise, the ARM CPU is on par with my Mac Studio’s, and Linux absolutely flies on both systems. So I migrated all of my virtualized Linux servers, containers, Kubernetes clusters, and Ansible playbooks over, and installed my full development stack. Suddenly I had one machine that could do everything: software development, virtualization, containers, devops, data science, AI inference, and local AI agents. Boom. The 90s are back, baby!
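For the KVM guests, a migration like that mostly boils down to copying each disk image and re-defining the domain on the new machine. Here’s a minimal sketch of that pattern in Python (not my actual scripts); the hostname, guest name, and image path are hypothetical, and it assumes password-less SSH plus the standard virsh and rsync tools on both ends.

```python
#!/usr/bin/env python3
"""Sketch: move a libvirt/KVM guest from one ARM Linux host to another.

Hypothetical names throughout (source host, guest name, image path);
assumes password-less SSH, and virsh + rsync installed on both machines.
"""
import subprocess

OLD_HOST = "macstudio.local"           # hypothetical source host
VM_NAME = "dev-server-01"              # hypothetical guest name
IMAGE = f"/var/lib/libvirt/images/{VM_NAME}.qcow2"

def run(cmd: list[str]) -> None:
    """Run a command, echo it, and fail loudly if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Copy the guest's disk image from the old workstation.
run(["rsync", "-avP", f"{OLD_HOST}:{IMAGE}", IMAGE])

# 2. Export the domain XML from the old host...
xml = subprocess.run(
    ["ssh", OLD_HOST, "virsh", "dumpxml", VM_NAME],
    check=True, capture_output=True, text=True,
).stdout
with open(f"/tmp/{VM_NAME}.xml", "w") as f:
    f.write(xml)

# 3. ...and define + start the guest on the new host.
run(["virsh", "define", f"/tmp/{VM_NAME}.xml"])
run(["virsh", "start", VM_NAME])
```

Since both machines are ARM, the guests boot unmodified; no re-installs, no image conversion.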
But there’s more! The GB10 is an AI supercomputer, and NVIDIA has made running anything AI-related super easy (they have detailed documentation for setting everything up). So I’m also running a NemoClaw AI agent that helps me with my tasks. NemoClaw basically turns the DGX Spark into a local AI operator: instead of just responding to prompts like a traditional chatbot, it can automate development workflows, inspect files, write code and scripts, summarize documents, run commands, and analyze logs. That means I can delegate real tasks to it rather than just asking questions.
It leverages the Nemotron 3 Super 120B-parameter AI model, and primarily uses Telegram as the chatbot interface. It isolates the AI agent from the host system, which makes it much safer than just running the underlying OpenClaw directly (I can easily restrict what the AI agent has access to and what it is integrated with). Everything runs beautifully on the system alongside my other workloads. A few years ago, this kind of setup would have required an entire server rack. Now it runs on my desktop. That still feels slightly absurd in the best possible way.
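I can’t show NemoClaw’s internals here, but the core pattern – a local model that proposes actions and a sandbox that executes them – is easy to sketch. The toy below is illustrative only (not NemoClaw’s actual code); it assumes a model served through Ollama’s local REST API and uses a throwaway podman container with networking disabled as the sandbox. The model tag is a placeholder for whatever is pulled locally.

```python
#!/usr/bin/env python3
"""Toy agent loop: ask a local model for a shell command, run it sandboxed.

Illustrative only -- not NemoClaw's actual implementation. Assumes a
model served by Ollama's local REST API and podman available on the host.
"""
import json
import subprocess
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "gpt-oss:120b"  # placeholder; any locally pulled model tag works

def ask_model(task: str) -> str:
    """Ask the local model to turn a task into a single shell command."""
    payload = json.dumps({
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": f"Reply with one POSIX shell command, nothing else: {task}",
        }],
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"].strip()

def run_sandboxed(command: str) -> str:
    """Run the proposed command in a throwaway container with no network."""
    result = subprocess.run(
        ["podman", "run", "--rm", "--network=none",
         "docker.io/library/alpine", "sh", "-c", command],
        capture_output=True, text=True,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    cmd = ask_model("count the words in /etc/os-release")
    print(f"model proposed: {cmd}")
    print(run_sandboxed(cmd))
```

A real agent adds tool schemas, confirmation steps, and persistent context, but the safety idea is the same: the model never touches the host directly, and I decide exactly what the sandbox can see.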
I also run the GPT-OSS 120B-parameter model locally on the DGX Spark for AI-assisted development within VS Code. As a senior developer, I treat AI less as “vibe coding” and more as a force multiplier for the repetitive work that slows down real software engineering. That means I can spend more time on architecture, systems design, and problem solving.
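“Locally” here just means the editor talks to an HTTP endpoint on localhost instead of a cloud API. Ollama exposes an OpenAI-compatible endpoint for exactly this, so as a minimal sketch (assuming the gpt-oss:120b tag has already been pulled), a completion request looks like this:

```python
#!/usr/bin/env python3
"""Minimal sketch: one completion from a locally served model.

Assumes Ollama is running and the gpt-oss:120b tag has been pulled.
"""
import json
import urllib.request

# Ollama's OpenAI-compatible endpoint; nothing leaves the machine.
payload = json.dumps({
    "model": "gpt-oss:120b",
    "messages": [{
        "role": "user",
        "content": "Write a Python function that validates an IPv4 address.",
    }],
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```

VS Code extensions that support local models typically just point at that same endpoint, so prompts and code never leave the machine.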
What’s fascinating to me is that the industry somehow ended up back where we started: powerful local UNIX workstations running on RISC architectures, used by engineers, scientists, and developers to do computationally intensive work. The difference is that the workload is no longer just statistical analysis, 3D rendering, or scientific visualization… now we’re running AI models locally on our UNIX workstations. Wicked.
Disclaimer: I still use macOS and Windows systems to support the IT, software development, and data science programs at the college, since those programs are taught on either a Windows PC or a Mac.
For those interested, here’s my current workstation stack:
- Dell Pro Max GB10
- Ubuntu Linux
- NemoClaw
- Nemotron 3 Super
- GPT-OSS:120b
- Ollama
- VS Code and vim (a “vi-able” alternative)
- JupyterLab
- Podman/Docker
Nextcloud
- Ansible (control node with hundreds of playbooks and roles)
- Kubernetes (k3s)
- KVM/QEMU