Fourteen years ago, if you wanted to program an FPGA, you downloaded a proprietary toolchain from Xilinx or Altera, signed a license agreement you didn't read, and prayed the 8-gigabyte installer wouldn't corrupt your system partition. The synthesis tools were free-as-in-beer but closed-as-in-vault. You couldn't inspect them, modify them, or understand why your design failed timing by 0.3 nanoseconds at 2 AM.
This week, a solo developer published a blog post titled "My DIY FPGA Board Can Run Quake II." It hit 327 points on Hacker News. And the thread that followed is, quietly, one of the most important conversations about hardware democratization you'll read this year.
But to understand why a hobbyist running a 1997 shooter on custom silicon matters, you need the timeline.
2013–2015: Project IceStorm Cracks the Bitstream Wide Open
The story starts with Clifford Wolf, an Austrian engineer who decided to reverse-engineer the bitstream format of Lattice Semiconductor's iCE40 FPGAs. The project, called IceStorm, was audacious in a way that's hard to overstate. FPGA bitstream formats are proprietary, undocumented, and intentionally obscure. Figuring out which bits configure which logic elements is like reconstructing a city's electrical grid by flipping breakers and watching which lights go out.
Wolf succeeded. By 2015, IceStorm could take Verilog, synthesize it through an open-source tool called Yosys (also Wolf's creation), place and route it with Arachne-pnr, and produce a working bitstream for iCE40 chips. No vendor tools required. No license agreements. No 8-gigabyte installers.
The iCE40 was tiny, a few thousand logic cells. Enough for blinking LEDs and simple serial interfaces, not much more. But the principle was proven: you could go from HDL to hardware without touching a single proprietary binary.
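For the software developers in the audience, it's worth seeing how small that flow actually is. A minimal sketch, assuming you have the IceStorm tools installed and a hypothetical `top.v` design with a `pins.pcf` constraint file (filenames are illustrative, and the exact device flag depends on your board):

```shell
# Synthesize Verilog to a netlist with Yosys
yosys -p 'synth_ice40 -top top -json top.json' top.v

# Place and route for an iCE40 HX1K, producing a textual bitstream
nextpnr-ice40 --hx1k --json top.json --pcf pins.pcf --asc top.asc

# Pack into the binary bitstream format IceStorm reverse-engineered
icepack top.asc top.bin

# Flash it to the board over USB
iceprog top.bin
```

Four commands, all open source, no license server in sight. That is the entire point.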
2018: The ECP5 and the Tool That Changed Everything
The next inflection point came when the open-source community turned its attention to Lattice's ECP5 family, chips with 25,000 to 85,000 logic cells. David Shah led much of the reverse-engineering effort through Project Trellis, documenting the ECP5's bitstream format with enough fidelity to build a complete open-source flow.
Around the same time, nextpnr replaced Arachne-pnr as the place-and-route engine. Faster, more capable, architecturally cleaner. The open toolchain was no longer a proof of concept. It was becoming infrastructure.
This matters because place-and-route is the hard part of FPGA development. Synthesis, turning your Verilog into a netlist of gates, is well-understood compiler theory. But mapping that netlist onto physical resources, routing signals through a maze of configurable interconnects while meeting timing constraints: that's where the billion-dollar EDA companies earn their money. An open-source tool doing this competently was the moment the landscape shifted.
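To make "the hard part" concrete, here is a toy sketch of the idea behind placement. This is not nextpnr's actual algorithm (real placers are far more sophisticated, and routing is a separate problem entirely); it's a minimal simulated-annealing placer that maps a small netlist onto a grid so connected cells land near each other, with all names and numbers invented for illustration:

```python
import math
import random

# Toy placement by simulated annealing: put connected "cells" close
# together on a grid. A sketch of the *idea* behind place-and-route,
# not what nextpnr actually does.
random.seed(0)

cells = ["a", "b", "c", "d", "e", "f"]
# 2-pin nets forming a ring: a-b-c-d-e-f-a
nets = [("a", "b"), ("b", "c"), ("c", "d"),
        ("d", "e"), ("e", "f"), ("a", "f")]
GRID = 4  # a 4x4 grid of physical sites

# Start from a random placement: cell -> (x, y), one cell per site.
sites = [(x, y) for x in range(GRID) for y in range(GRID)]
random.shuffle(sites)
place = dict(zip(cells, sites))

def wirelength(p):
    # Total Manhattan distance over all nets: the cost we minimize.
    return sum(abs(p[u][0] - p[v][0]) + abs(p[u][1] - p[v][1])
               for u, v in nets)

temp = 5.0
while temp > 0.01:
    for _ in range(100):
        u, v = random.sample(cells, 2)
        before = wirelength(place)
        place[u], place[v] = place[v], place[u]      # propose a swap
        delta = wirelength(place) - before
        # Metropolis criterion: always keep improvements; keep a
        # worsening move only with probability exp(-delta / temp).
        if delta > 0 and random.random() >= math.exp(-delta / temp):
            place[u], place[v] = place[v], place[u]  # undo the swap
    temp *= 0.9  # cool down

print(wirelength(place))
```

Now imagine the grid has 85,000 sites, the nets have fan-outs in the hundreds, every route competes for shared interconnect, and the answer has to meet a clock constraint. That's the problem an open-source tool now solves competently.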
2020–2024: Quiet Years, Critical Mass
The years between the ECP5 breakthrough and today saw steady, unglamorous progress. LiteX, a framework for building SoC (System-on-Chip) designs in Python, lowered the barrier from "must know Verilog intimately" to "must be willing to learn a Python API." Amaranth HDL (formerly nMigen) offered a more modern hardware description approach. SpinalHDL brought Scala into the mix.
CPU cores like VexRiscv, a RISC-V implementation written in SpinalHDL, gave these open toolchains something meaningful to synthesize. You could now build a complete, working computer on an FPGA using nothing but open-source tools. Linux boots on these designs. Not fast, but it boots.
The community also started targeting bigger, more capable FPGAs. The Gowin family. Some early work on Xilinx 7-series through Project X-Ray. Each new target expanded what was possible without vendor lock-in.
2026: A Homemade GPU Runs Quake II
Which brings us to this week's post, published on the developer's personal blog at pul.se. The project: a custom-designed FPGA board, not a commercial dev kit, running a soft GPU implementation capable of rendering Quake II at playable framerates.
Let me be precise about what this involves, because the Hacker News thread is full of people who don't quite grasp the layers.
This isn't running Quake II on a PC with an FPGA as an accelerator. The developer designed a circuit board, wrote (or adapted) a GPU architecture in HDL, synthesized it with open tools, and ran the game on the resulting hardware. The FPGA *is* the computer. The GPU is not a chip you buy, it's a design you describe in code and compile into configurable logic.
Imagine writing a novel, but instead of publishing it as a book, you build a printing press from scratch, design your own typeface, manufacture your own ink, and then print the novel. That's roughly the ratio of effort here.
Three Camps, One Thread, and What They Tell Us
The 327-point discussion splits into three camps, and the distribution tells you everything about where open silicon stands today.
Camp One: The Impressed Practitioners. These are people who've used FPGAs professionally and understand what the open toolchain's maturity implies. Their comments focus on timing closure, resource utilization, the specific GPU architecture choices. They know that getting Quake II running means solving memory bandwidth problems, implementing a rasterization pipeline, handling texture mapping, on a device where every multiplier and block RAM is precious.
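For readers outside Camp One, the rasterization pipeline they're discussing has a surprisingly compact core. Here is a minimal software sketch of edge-function triangle rasterization, the per-pixel test a soft GPU must implement in logic, once per pixel, every frame. This illustrates the technique in general, not the developer's actual hardware design:

```python
# Edge-function rasterization: decide pixel coverage for a triangle.
# This is the inner loop a soft GPU implements in parallel hardware;
# texture mapping and depth testing hang off the same w0/w1/w2 values.

def edge(ax, ay, bx, by, px, py):
    # Signed area of (a, b, p): positive when p is left of edge a -> b.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    """Return the set of pixels whose centers a CCW triangle covers."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            # Inside the triangle iff all three edge tests agree.
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.add((x, y))
    return covered

pixels = rasterize([(0, 0), (8, 0), (0, 8)], 8, 8)
print(len(pixels))  # → 36: the lower-left triangular half of an 8x8 tile
```

Three multiplies and a few subtractions per edge, per pixel. Trivial in software; on an FPGA, where every hardware multiplier is precious and memory bandwidth is the real wall, making this run at Quake II framerates is exactly the kind of problem Camp One is admiring.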
Camp Two: The Curious Outsiders. Software developers who've never touched an FPGA, asking genuinely good questions. "How is this different from an emulator?" "Could you tape this out as an ASIC?" "What does 'synthesis' mean in this context?" The quality of these questions has improved dramatically over the past five years, which itself is evidence that open toolchains are drawing new people into hardware design.
Camp Three: The Skeptics. A smaller but vocal group arguing this is a parlor trick. Quake II is a 29-year-old game. Any modern microcontroller could run it. Why does running it on an FPGA matter?
The skeptics miss the point in an instructive way. The achievement isn't "can play Quake II." The achievement is "a single person, using freely available tools, designed and fabricated custom computing hardware capable of real-time 3D rendering." The game is the benchmark, not the goal.
The Seven-Figure Line Just Moved
Here's the number that puts this in perspective. A decade ago, the minimum viable budget for custom chip design, from RTL to working silicon, was roughly $1 million for a simple design on an older process node. The tools alone (Synopsys, Cadence, Mentor) could run $100,000 per seat per year.
Open FPGA toolchains don't get you to custom silicon. FPGAs are reconfigurable, not custom-manufactured. But they get you to custom *hardware*, which for many applications is the thing that actually matters. And they get you there for the cost of the FPGA chip and a PCB fabrication run, maybe $200 in parts.
The progression is clear. 2015: blink an LED. 2018: run a RISC-V CPU. 2022: boot Linux. 2026: real-time 3D rendering. Each step looked like a novelty at the time. In sequence, they trace a capability curve that should make every developer who's ever flinched at hardware costs pay attention.
Where the Curve Points Next
Two trends are converging. First, the open toolchains are getting faster and targeting larger devices. Second, FPGA manufacturers are beginning to treat open-source support as a competitive advantage rather than a threat. Lattice's relatively open documentation posture, compared to Xilinx and Intel's historically locked-down approach, has earned them outsized community loyalty.
The endgame isn't everyone designing their own GPUs. It's that the *option* exists. When custom hardware is accessible to individual engineers, the design space explodes. Niche applications that could never justify a $1M chip program, medical devices for rare conditions, scientific instruments for obscure measurements, accessibility hardware for specific disabilities, suddenly become feasible.
One developer running Quake II on a homemade board is a demo. The toolchain that made it possible is the revolution.
And if you're in Camp Three, still convinced this is just a parlor trick, consider: every technology that eventually matters looks like a toy to someone who isn't paying attention.^[1]
^[1] I'd argue the real tell is the PCB design. Writing HDL is software-adjacent. Designing a board with controlled-impedance traces for high-speed memory interfaces? That's where the hobbyist-to-engineer jump happens. The fact that this developer did both is the part that keeps me up at night, in the good way.