I was already extremely pleased with the Firefox Quantum beta; they really are stepping up their game. If this is truly as clean as they say it is, web browsing on cheap computers just got much smoother.
I imagine the render task tree also has to determine which intermediate textures to keep in the texture cache, and which ones will likely need to be redone in the next frame. That kind of optimization has to be tricky.
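For illustration, the kind of retention policy being guessed at here could look like the following minimal sketch. This is not WebRender's actual cache; the types and the age-based eviction heuristic are invented. The idea is simply to keep each render task's output texture and evict entries that haven't been touched for a few frames, on the guess that they won't be needed again soon.

```rust
use std::collections::HashMap;

// Hypothetical texture cache: maps a render-task id to the frame on which
// its output texture was last used.
struct TextureCache {
    current_frame: u64,
    max_age: u64,
    entries: HashMap<u64, u64>, // task id -> frame last used
}

impl TextureCache {
    fn new(max_age: u64) -> Self {
        TextureCache { current_frame: 0, max_age, entries: HashMap::new() }
    }

    // Mark a cached texture as used this frame (or insert it).
    fn touch(&mut self, task_id: u64) {
        let frame = self.current_frame;
        self.entries.insert(task_id, frame);
    }

    fn contains(&self, task_id: u64) -> bool {
        self.entries.contains_key(&task_id)
    }

    // Advance to the next frame and drop entries that went unused too long;
    // their textures would have to be re-rendered if needed again.
    fn end_frame(&mut self) {
        let cutoff = self.current_frame.saturating_sub(self.max_age);
        self.entries.retain(|_, last_used| *last_used >= cutoff);
        self.current_frame += 1;
    }
}
```

A real renderer would also have to weigh texture memory pressure and the cost of re-rendering each task, which is where the tricky part comes in.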
I really appreciate the time they are taking to describe the changes in an easy to understand way. The sketches and graphics really help explain a pretty complex subject.
Now that this is closer to shipping, I'm curious what impact this would have on battery life. On the one hand, this is lighting up more silicon; on the other hand: a faster race to sleep, perhaps?
Have there been any measurements on what the end result is on a typical modern laptop?
I'd largely forgotten what pixel shaders actually were, so it was nice to get a high level understanding through this article, especially with the drawings!
Humorously enough, when I worked on a team that was writing a graphical web browser for mobile in the late '90s [1], they used a display list for rendering.
The reasoning was somewhat different: web pages were essentially static (we didn't do "DHTML"). If the page rendering process could generate an efficient display list, then the page source could be discarded and only the display list needed to be held in memory. This rendering could then be pipelined with reading the page over the network, so the entire page was never in memory.
Full Disclosure: while I later wrote significant components of this browser (EcmaScript, WmlScript, SSL, WTLS, JPEG, PNG), the work I'm describing was entirely done by other people!
[1] - I joined in 97, the first public demo was at GSM World Congress Feb 98
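A display list in the sense described above can be sketched as a flat list of primitive drawing commands; once it exists, the page source can be thrown away and rendering becomes a linear walk over the list. The item types here are invented for illustration, not the actual structures from that browser (or from WebRender):

```rust
// Hypothetical display-list items: the output of layout, with all styling
// and positioning already resolved into absolute coordinates.
enum DisplayItem {
    Rect { x: i32, y: i32, w: i32, h: i32, color: u32 },
    Text { x: i32, y: i32, text: String },
    Image { x: i32, y: i32, data: Vec<u8> },
}

// Painting is just a linear walk -- no DOM, no styles, no page source
// needed in memory anymore. Returns the number of items painted.
fn paint(items: &[DisplayItem]) -> usize {
    let mut painted = 0;
    for item in items {
        match item {
            DisplayItem::Rect { .. } => { /* fill a rectangle */ painted += 1; }
            DisplayItem::Text { .. } => { /* draw glyphs */ painted += 1; }
            DisplayItem::Image { .. } => { /* blit pixels */ painted += 1; }
        }
    }
    painted
}
```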
While I would consider myself more a Golang fan than a Rust fan, I am impressed by the speed with which the Mozilla team is changing fundamental parts of their browser, and I believe Rust has something to do with that speed.
> What if we stopped trying to guess what layers we need? What if we removed this boundary between painting and compositing and just went back to painting every pixel on every frame?
This feels a bit like cheating. Not all devices have a GPU. Would Firefox be slow on those devices?
Also, pages can become arbitrarily complicated. This means that an approach where compositing is used can still be faster in certain circumstances.
The name "WebRender" is unfortunate though. Things with a "Web" prefix - "Web Animations", "WebAssembly", "WebVR" - are typically cross-browser standards. This is just a new approach Firefox is using for rendering. It doesn't appear to be part of any standard.
Why are they so obsessed with 60 fps? 120 fps looks considerably better, and other artifacts like smear and judder keep decreasing even at much higher frame rates, say 480 fps [1].
Will there finally be a unified use of the GPU on all platforms (win, mac, linux, etc) or will WebRender just be a Windows only feature for quite some time?
Just tried it with the Nightly by setting gfx.webrender.enabled to true in about:config. Wow, that thing flies. It's seriously amazing. And so far no bugs or visual inconsistencies I could detect. Firefox is really making great progress on this front!
I tried testing it out on a ThinkPad T61 to see how well it works with an older embedded GPU (Intel 965 Express), but I can't enable it (on Windows 10) because D3D11 compositing is disabled, it says D3D11_COMPOSITING: Blocklisted; failure code BLOCKLIST_
So does that mean that it is known not to work with that GPU? Can you override the blocklist to see what happens?
Edit: It also says:
> Direct2D: Blocked for your graphics driver version mismatch between registry and DLL.
and
> CP+[GFX1-]: Mismatched driver versions between the registry 8.15.10.2697 and DLL(s) 8.14.10.2697, reported.
Indeed that is correct: the driver is marked as version 8.15.10.2697, but the file version of the DLLs is 8.14.10.2697. This seems to be intentional by Microsoft or Intel; note that the build numbers are still the same. Firefox is quite naive if it thinks it can just match those.
If only it were as simple as just using Loop-Blinn. :) The technique described there will produce unacceptably bad antialiasing for body text. Loop-Blinn is fine if you want fast rendering with medium quality antialiasing, though. (Incidentally, it's better to just use supersampling or MLAA-style antialiasing with Loop-Blinn and not try to do the fancy shader-based AA described in that article.)
Additionally, the original Loop-Blinn technique uses a constrained Delaunay triangulation to produce the mesh, which is too expensive (O(n^3) IIRC) to compute in real time. You need a faster technique, which is really tricky because it has to preserve curves (splitting when convex hulls intersect) and deal with self-intersection. Most of the work in Pathfinder 2 has gone into optimizing this step. In practice people usually use the stencil buffer to compute the fill rule, which hurts performance as it effectively computes the winding number from scratch for each pixel.
The good news is that it's quite possible to render glyphs quickly and with excellent antialiasing on the GPU using other techniques. There's lots of miscellaneous engineering work to do, but I'm pretty confident in Pathfinder's approach these days.
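To make the stencil-buffer point concrete: computing the fill rule "from scratch" for a pixel means evaluating a winding number against the path's edges, roughly as in the following sketch. This handles line segments only; curves, antialiasing, and robustness are omitted, and it is not Pathfinder's actual algorithm.

```rust
// Nonzero winding number of the point (px, py) against a closed polygonal
// path, computed by counting signed crossings of a rightward horizontal ray.
fn winding_number(px: f64, py: f64, path: &[(f64, f64)]) -> i32 {
    let mut winding = 0;
    for i in 0..path.len() {
        let (x0, y0) = path[i];
        let (x1, y1) = path[(i + 1) % path.len()]; // wrap to close the path
        // Does this edge cross the scanline through (px, py)?
        if (y0 <= py) != (y1 <= py) {
            // x-coordinate where the edge crosses the scanline.
            let t = (py - y0) / (y1 - y0);
            let x = x0 + t * (x1 - x0);
            if x > px {
                // Upward crossing adds 1, downward crossing subtracts 1.
                winding += if y1 > y0 { 1 } else { -1 };
            }
        }
    }
    winding
}

// Nonzero fill rule: the sample is inside if the winding number is nonzero.
fn is_inside(px: f64, py: f64, path: &[(f64, f64)]) -> bool {
    winding_number(px, py, path) != 0
}
```

Doing this per pixel over a whole glyph is exactly the redundant work the stencil approach pays for on the GPU.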
They do, but we're targeting Intel HD quality graphics, not gaming-oriented NVIDIA and AMD GPUs.
That said, even Intel GPUs can often deal with large numbers of draw calls just fine. It's mobile where they become a real issue.
Aggressive batching is still important to take maximum advantage of parallelism. If you're switching shaders for every rect you draw, then you frequently lose to the CPU.
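A minimal sketch of that batching idea, with invented shader ids and rect types: group primitives by the shader state they require, so the number of draw calls scales with the number of distinct shaders rather than the number of rects.

```rust
use std::collections::BTreeMap;

// Hypothetical primitive: a rect tagged with the id of the shader program
// it needs. The geometry fields are unused in this sketch.
struct Rect { shader: u32, x: f32, y: f32, w: f32, h: f32 }

// Group rects into one batch per shader and return the number of draw
// calls needed: one per batch, instead of one per rect.
fn batch_draw_calls(rects: &[Rect]) -> usize {
    let mut batches: BTreeMap<u32, Vec<&Rect>> = BTreeMap::new();
    for rect in rects {
        batches.entry(rect.shader).or_default().push(rect);
    }
    batches.len()
}
```

With 100 rects spread over 3 shaders, this issues 3 draw calls instead of 100, and the per-call CPU overhead mostly disappears.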
With a compositor you're already drawing every pixel every frame on the GPU, whether it's just a cursor blinking or not. The WR approach basically only adds a negligible amount of vertex shading time.
Despite the huge fans and heatsinks on modern desktop GPUs, I presume that a GPU would still use less energy than a CPU for the same workload, yes? Do mobile GPUs have a sleep mode comparable to mobile CPUs? Completely agreed that some measurements would be nice.
I know it’s not officially released, so I’m hoping it gets fixed, but FF57 rips through my Mac’s battery life and runs insanely hot on simple JS apps. I still use it daily because it generally works, but there are a few apps I just have to go to Chrome for.
Your browser is doing ~60fps rendering on the GPU already, it's just doing it much less efficiently. This does less work on the CPU _and_ less work on the GPU, for the same result.
Games eat a lot of power by rendering this way. So, definitely a concern, and I'd want to see benchmarks before switching from Safari on my macbook.
That said, GPUs are pretty clever about this. A big chunk of power consumption comes from IO, i.e. moving data from the GPU to the off-chip DRAM. Mobile GPUs optimize for static scenes where nothing is changing by keeping hashes of small blocks (say 32x32 pixels) of the framebuffer in on-chip memory. If the hash of a block hasn't changed, they don't bother re-writing that block into the framebuffer. But the GPU still ends up running the shaders every frame.
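The block-hashing scheme described here can be sketched as follows. This is a hypothetical illustration of the idea, not any specific GPU's implementation; the tile layout and hash choice are invented.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash one tile of the framebuffer (e.g. a 32x32 block of pixels).
fn tile_hash(pixels: &[u32]) -> u64 {
    let mut hasher = DefaultHasher::new();
    pixels.hash(&mut hasher);
    hasher.finish()
}

// Compare each rendered tile against last frame's stored hash; only tiles
// whose hash changed pay the DRAM write. Returns the number of writes and
// updates the stored hashes in place.
fn tiles_to_write(frame: &[Vec<u32>], prev_hashes: &mut Vec<u64>) -> usize {
    let mut writes = 0;
    for (i, tile) in frame.iter().enumerate() {
        let h = tile_hash(tile);
        if prev_hashes[i] != h {
            prev_hashes[i] = h;
            writes += 1; // this tile changed: write it out to DRAM
        }
    }
    writes
}
```

For a blinking cursor, only the one tile containing the cursor is rewritten each frame, even though the shaders ran over the whole scene.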
I've been working professionally with Rust for a year now. Once I got over the initial wall, it became the best tool I've had for building backend applications. I have history with at least nine different languages over my professional career, but nothing comes close to the confidence and ergonomics that the Rust ecosystem's tooling provides.
Firefox, especially the new Quantum version, is awesome. But Rust as a side product might be the best thing Mozilla has brought us. I'm truly thankful for that.
Don’t know about measurements. But eventually, after a couple of stable FF versions have shipped with that new renderer, I’d expect a positive impact.
GPUs are much more power efficient per FLOP. E.g. in my desktop PC, the theoretical limit for the CPU is 32 FLOP/cycle * 4 cores * 3.2 GHz = 400 GFLOPS; for the GPU the theoretical limit is 2.3 TFLOPS. TDP is 84 W for the CPU and 120 W for the GPU.
A GPU has the vast majority of its transistors actually doing math, while in a CPU core a large percentage of the transistors are doing something else: cache synchronization/invalidation, instruction reordering, branch prediction, indirect branch prediction (a GPU has none of that), and instruction fetch and decode (on a GPU that's shared between a group of cores that execute the same instructions in lockstep).
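Working through the numbers quoted above (the commenter's own hardware figures, theoretical peaks only), the per-watt gap comes out to roughly 4x:

```rust
// Peak-throughput power efficiency from the GFLOPS and TDP figures quoted
// in the comment above.
fn gflops_per_watt(gflops: f64, watts: f64) -> f64 {
    gflops / watts
}
```

32 FLOP/cycle * 4 cores * 3.2 GHz = 409.6 GFLOPS at 84 W gives the CPU about 4.9 GFLOPS/W; 2300 GFLOPS at 120 W gives the GPU about 19.2 GFLOPS/W.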
Actually, virtually every device the average consumer uses has a GPU. For instance, even Atom processors have GPUs. Granted, they don't have as many cores as a full-fledged NVIDIA GPU, nor as much dedicated memory, but they are still GPUs, with several tens of cores and specialized APIs designed for the tasks at hand. Plus, they offload (some of) the work from the CPU.
Not an expert, but I feel that was more of an analogy/image of what they were aiming for. The real objective is not 60 fps; the real objective is to use the GPU for the tasks it was designed for. Plain and simple. This, however, gives the user a smoother experience, and 60 fps generally makes a noticeable difference.
As everyone said, 60fps is not the destination but merely a waypoint. It's a good goal, considering 99% of screens that are in use today refresh at 60 Hz or their regional equivalent. Higher refresh rates are next.
The WebRender folks are well aware that higher framerates are the future. Here's a tweet from Jack Moffitt today, a Servo engineer (and Servo's technical lead, I believe): https://twitter.com/metajack/status/917784559143522306
"People talk about 60fps like it's the end game, but VR needs 90fps, and Apple is at 120. Resolution also increasing. GPUs are the only way. Servo can't just speed up today's web for today's machines. We have to build scalable solutions that can solve tomorrow's problems."
To address your second point, you seem to be saying that missing the frame budget once and then compositing the rest of the time would be better than missing the frame budget every time.
That is certainly true, but a) the cases where you can do everything as a compositor optimization are very few (transform and opacity mostly) so aside from a few fast paths you'd miss your frame budget all the time there too, and b) we have a lot of examples of web pages that are slow on CPU renderers and very fast on WebRender and very few examples of the opposite aside from constructed edge case benchmarks. Those we have found had solutions and I suspect the other cases will too.
As resolution and framerate scale, CPUs cannot keep up. GPUs are the only practical path forward.
I remember reading at some point that WebRender could actually be isolated relatively easily and then applied to basically any browser. That sort of already took place, going from Servo over into Gecko.
So, it might actually turn into somewhat of a pseudo-standard.
We're not, as other people have said in other comments. On normal content you can often see WebRender hit 200+ fps if you don't lock it to the frame rate. To see this for yourself, run Servo on something with -Z wr-stats, which will show you a performance overlay.
I don't think they are obsessed with 60 FPS; that's just what most people consider synonymous with a smooth experience, and it's often not met by browsers at this point in time.
In the video, he says 500 FPS, but assuming there's no more complicated formula behind this, I think it would actually be 2174 FPS. (0.46 ms GPU time per frame -> 1/0.00046s = 2173.913 FPS)
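The arithmetic checks out:

```rust
// Converting GPU time per frame into the theoretical frame rate it could
// sustain, as in the comment above.
fn fps_from_frame_time_ms(frame_time_ms: f64) -> f64 {
    1000.0 / frame_time_ms
}
```

0.46 ms per frame gives 1000 / 0.46 ≈ 2173.9 FPS; a figure of 500 FPS would correspond to 2 ms per frame.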
Vulkan has been a consideration from the earliest architectural decisions in WebRender. So, the internal pipelines are all set up to map onto Vulkan's pipelines.
It's actually OpenGL that fits the architecture less well, but it's still easier to just bundle WebRender's pipelines together and hand that to OpenGL.
I have WebRender working on Linux with Intel 5500 integrated graphics. Hardware acceleration is still a bit glitchy though I'm afraid (with or without WebRender).
To enable, toggle 'layers.acceleration.force-enabled' as well as 'gfx.webrender.enabled'
edit: It's also working through my Nvidia 950m (through bumblebee), although subjectively it seems to have a little more lag this way.
Take it from a ThinkPad X60 owner: the Intel GPUs from that era are absolute trash. They don't support OpenGL 3.0 on any platform (in fact, Intel didn't gain 3.0+ support until Sandy Bridge in 2011(!)), so don't expect any of this recent GPU-centric work (which seems to be targeting OpenGL 3.0+) to work on these GPUs. It would probably work just fine on the contemporary Radeon and GeForce cards, since they support OpenGL 3.3.