A reading of millions of live samples from our UK workstation fleet — what professional CAD, BIM and rendering machines really pull at the wall socket.
Open any GPU spec sheet and you will find a single large number, printed in bold: total board power, somewhere between 70 and 600 watts. It is a ceiling — the wattage the silicon will tolerate before it starts throttling itself. In professional workstation use, it is rarely reached. The workloads that do reach it — AI training, cryptocurrency mining — run hardware flat-out, around the clock. CAD, BIM and rendering, the workloads our customers actually run, look nothing like that.
For the past several months, we have been recording wall-socket power continuously across our UK workstation fleet — that is millions of measurements and growing. The data confirms what we always suspected: even our most demanding silicon runs, on average, at a fraction of its rated TDP.
The cloud, however, bills as if the opposite were true. Flat monthly rates quietly assume every machine — server, workstation, virtual desktop — runs at its rated TDP every working hour. Specifications and quotes look high on paper because providers have to cover the worst case: a customer who saturates the silicon, twenty-four hours a day, for the full term of the contract. Most workstation users don’t use their machines that way; ours certainly don’t. This report makes that gap visible — openly, with the underlying data and methodology published alongside.
This is normal. CAD, BIM and rendering are bursty by nature — quiet most of the day, with short peaks when something heavy is happening. The shape of the data on this page is what professional AEC work looks like. The problem isn’t the workload; it’s the billing model. Most cloud workstation contracts charge a flat monthly rate built on a worst-case assumption: the machine running flat-out, every hour, every day. The number on your invoice doesn’t move whether someone was working at the screen or the office was dark.
Real design work happens in a working week. The bill, on a flat contract, keeps ticking through evenings, nights and weekends — long after the office is empty. The bar below splits the week the way your contract does.
Seats occupied, applications open, real work happening.
Evenings, nights, weekends. The machine is on. Nobody is at it.
Roughly three out of every four hours on a static contract are paying for a machine nobody is sitting at.
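The split in that bar is simple arithmetic, sketched here under the standard assumption of a five-day, eight-hour working week (the 168 / 40 split used throughout this report):

```python
# Sketch of the 168 / 40 working-week split behind the bar above.
HOURS_PER_WEEK = 7 * 24   # 168 hours a flat contract bills for
OCCUPIED_HOURS = 5 * 8    # ~40 hours of seats-occupied, real work

idle_share = 1 - OCCUPIED_HOURS / HOURS_PER_WEEK
print(f"{idle_share:.0%} of billed hours are unoccupied")  # -> 76%
```

Hence "roughly three out of every four hours": 128 of the 168 billed hours fall outside the occupied window.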
Abstractions aside, here is one of those machines. An RTX 5070 in our UK fleet, every reading streamed to the chart on the right. The dashed line above is its rated 250 W envelope — what the spec sheet says it can pull. The shaded curve is what the silicon actually does pull when someone is using it. Watch how rarely the two meet.
An RTX 5070 drew, on average, just 38% of its 250 W rated TDP.
A workstation drawing 38% of its rated power isn’t under-spec’d — it’s doing exactly what professional CAD, BIM and rendering work asks of it. These workloads are bursty by design: short, intense peaks when geometry loads or a render kicks off, long quiet plateaus the rest of the time. The four figures below describe that rhythm — CPU in single digits, GPU memory barely tapped, RAM resident but unused, and at any moment four in every five seats are idle. Tap a figure for the day-by-day shape. The hardware isn’t wasted; an always-on contract is.
Three individual machines plotted against the fleet average. The dashed line is what flat-priced cloud bills you for, regardless of activity.
Average those daily figures across a working week and the 168 / 40 split from earlier becomes visible at a glance: a narrow band of mid-day activity, Monday to Friday — surrounded on every side by hours when the machines are on but nobody is at them. The grid below averages the 150-machine sample across the week, one cell per hour, darker for more power.
Each of the figures below comes from the same heatmap, drawn from 60 days of telemetry across the 150-machine sample. They describe a working week that’s remarkably consistent — and a baseline that never drops to zero.
At its weekly peak (Wednesday, 12:00), 59% of the fleet shows activity. Every other slot in the week is at or below 57%. Hump-day really is the busy day.
Activity jumps by about a fifth between 08:00 and 09:00 every weekday. No earlier ramp — architects start work at 09:00.
Fleet activity drops 4–5% from 12:00 to 13:00 every weekday, then recovers. The UK architecture industry takes lunch at one.
By 18:00 on Friday, fleet activity has already fallen to its weekend baseline. Mon–Thu hold above that mark until ~19:00.
Saturday and Sunday register the same activity at 11:00 as they do at 04:00. No ramp, no dip, no peak — just the background hum.
Activity never falls below 34% — even at 04:00 on a Sunday. The fleet has no off-state on a flat contract; that floor is the auto-shutdown target.
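The heatmap those six figures come from reduces to a group-by over raw samples: bucket every reading by (day-of-week, hour), then average each bucket. A minimal sketch; the (timestamp, watts) pairs and the example readings are illustrative, not fleet telemetry:

```python
from collections import defaultdict
from datetime import datetime

def weekly_heatmap(samples):
    """Average wall-power readings into one cell per (weekday, hour)."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ts, watts in samples:
        key = (ts.weekday(), ts.hour)  # (0=Mon .. 6=Sun, 0..23)
        sums[key] += watts
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Two Monday-09:00 readings average into one cell.
cells = weekly_heatmap([
    (datetime(2026, 5, 4, 9, 15), 120.0),
    (datetime(2026, 5, 4, 9, 45), 80.0),
])
print(cells[(0, 9)])  # -> 100.0
```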
One heatmap is a fleet averaged into one cell per hour. Here, instead, are the individual machines themselves — 150 of them from five different UK customers, each line one workstation, each cell one day’s mean. If the gap between rated and used were a quirk of a particular site, GPU or workload, you’d see it scatter. It doesn’t. Pick a GPU type to focus the lens.
One bar per workstation, sorted by value. On every tier except the 5090, average compute load sits in the single digits of a percent across 30 days. Switch the metric toggle to see peak readings — the cores do fire, but in brief bursts, not as a sustained load. That’s how CAD, BIM and rendering work has always behaved.
The fleet mean hides a wide spread. Some workstations sit at memory loads under 1% all month. Others run up to 80% of their VRAM full. The panel below plots every seat in the sample for the selected GPU — one bar per machine, sorted by value. Switch the toggle to compare against compute utilisation. The cards spend their time holding data far more than they spend crunching it.
The headline number on a GPU spec sheet is its total board power. For each machine in the sample, we took the single highest power reading across the 30-day window and asked: did it ever come within 90% of that ceiling?
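That test reduces to a one-line comparison per machine. A hedged sketch of the check described above, with illustrative readings rather than fleet data:

```python
# Did any reading in the 30-day window come within 90% of the card's
# rated board power? One boolean per machine.
def hit_ceiling(readings_w, rated_w, threshold=0.90):
    return max(readings_w) >= threshold * rated_w

# Illustrative numbers: a 250 W card peaking at 212 W never crosses
# the 225 W (90%) line.
print(hit_ceiling([38, 95, 212, 104], rated_w=250))  # -> False
```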
Every reading on one table, measured against the card’s rated capacity. The shape is the story: bursty by design, quiet between bursts — and consistent across tiers. Click any card’s name to focus its column; hover the ? next to a row for what that row actually measures.
The RTX 5090 is the only tier where average load crosses 10% — production rendering, ray-tracing and AI workflows give the cores something sustained to do. Where a 5090 is specified, it earns its keep.
On the 5070, 5080 and SFF, average load sits at 5–7%. That isn’t under-spec or over-spec — it’s the shape of the work. The card’s rated capacity is reserved for the peak; outside the peak, the silicon coasts. Architecture workloads have always looked like this.
Static cloud contracts charge as if every card sat at the peak all day. The data above says the work doesn’t. The next chapter turns the difference between “peak” and “real” into pounds on the invoice.
Watts pulled at the wall socket are only half the carbon story. The other half is which power station produced them — and that number changes by the minute. Every PC sat at someone’s desk today is drawing from the national grid mix below. A Computle workstation isn’t. Its kWh comes from direct power-purchase agreements at all three of our UK sites.
Take that grid-attached kWh and stretch it over a working year. A typical RTX 5070 drawing the 95 W fleet mean for 8,760 hours pulls 832 kWh. Run that on the public UK grid, where the average kWh today still carries 155 grams of CO₂, and the scope-2 number falls out one way. Run the same workload on a Computle site buying renewable power directly, and it falls out another.
What your scope-2 report would have to declare — a workstation drawing from the public UK grid for a full year.
Same workload, renewable-supplied UK sites. Each kWh logged with the grid mix at the time it was drawn.
100 seats × 95 W fleet mean × 8,760 h × 155 g/kWh UK grid avg = 12.9 tonnes. Direct PPA renewable supply takes that to zero.
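The caption’s arithmetic, reproduced step by step:

```python
# Scope-2 arithmetic from the caption above.
SEATS = 100
MEAN_W = 95            # measured RTX 5070 fleet mean, at the wall
HOURS = 8760           # always-on year
GRID_G_PER_KWH = 155   # 2024-25 UK grid average

kwh_per_seat = MEAN_W * HOURS / 1000             # 832.2 kWh
tonnes = SEATS * kwh_per_seat * GRID_G_PER_KWH / 1e6
print(f"{tonnes:.1f} t CO2")  # -> 12.9 t CO2
```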
Only one in four kWh on the bill is consumed during the standard work day. The other 75% lands in evenings, nights and weekends.
More energy is consumed on Saturdays and Sundays — when the office is empty — than during the full Mon–Fri working day.
Across the dataset’s 47M+ samples (and counting), six in ten readings show CPU under 5% AND GPU under 1%. Idle is the default state of a cloud workstation.
Even between 09:00 and 17:00 Monday to Friday, almost half the fleet is at idle. The office may be busy; the silicon is not.
At 01:00 on Sunday — the quietest hour of the week — the fleet still draws 73% of its busiest weekday hour. The machines never really sleep.
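The idle definition quoted above (CPU under 5% and GPU under 1%, simultaneously) reduces to a two-condition test. The sample values here are illustrative, not fleet readings:

```python
# Idle = both conditions hold at once.
def is_idle(cpu_pct, gpu_pct):
    return cpu_pct < 5.0 and gpu_pct < 1.0

# Five illustrative (cpu%, gpu%) samples; three satisfy both conditions.
samples = [(2.1, 0.0), (3.8, 0.4), (41.0, 12.5), (1.2, 0.9), (6.0, 0.0)]
idle_share = sum(is_idle(c, g) for c, g in samples) / len(samples)
print(f"{idle_share:.0%}")  # -> 60%
```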
Before the 2022 energy crisis, the inefficiency in the data on this page was tolerable: electricity was a small line in a cloud bill, and most customers didn’t question it. After Russia’s invasion of Ukraine, wholesale electricity prices in Europe spiked and stayed elevated. The gap between what the silicon actually draws and what static contracts bill for stopped being a curiosity and started being the difference between a manageable bill and an alarming one.
The fix needed scale. You can’t buy power at wholesale as a small operator; you need the volume to make a direct relationship work. We are now at that size. Computle Flex is the billing model that follows from buying at wholesale: a fixed monthly fee for the workstation, plus the kilowatt-hours your team actually draws — passed straight through. The next section turns the data above into what that means on your invoice.
Three things move when a fleet shifts from a static cloud contract onto Flex, and each one is backed by the data already on this page. The numbers below are for a mid-spec RTX 5070 workstation drawing the measured 95 W fleet average, on UK (B).
Static contracts bill ~163 W per seat (half rated TDP). Real workloads pull ~95 W. Switching to a metered bill removes the gap and takes about 40% off the energy line straight away.
Of the energy that’s left, the machines are only actively used for 40 of the 168 hours in a week. Auto-shutdown out of hours strips another ~70% off the metered energy line.
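The two reductions compound rather than add. A sketch using the figures quoted above; note that the ~70% out-of-hours figure is the report’s own estimate (the fleet keeps a non-zero baseline draw), not raw 40-of-168 arithmetic:

```python
BILLED_W = 163     # static contracts: ~half of rated system TDP
MEASURED_W = 95    # metered fleet mean at the wall

# Step 1: metering alone removes the rated-vs-real gap.
metering_saving = 1 - MEASURED_W / BILLED_W
print(f"metering alone: {metering_saving:.0%} off")      # -> 42% off

# Step 2: out-of-hours auto-shutdown, applied to what's left.
shutdown_saving = 0.70   # report's estimate, not 1 - 40/168
remaining = (1 - metering_saving) * (1 - shutdown_saving)
print(f"combined energy line: {1 - remaining:.0%} off")  # -> 83% off
```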
Computle buys its electricity on a wholesale, time-of-use basis — not a flat retail tariff. That’s the precondition for billing customers on actual usage rather than worst-case headroom.
The same telemetry behind every chart above feeds the calculator below. Pick the silicon, the memory, the storage and a UK site — all three on direct PPA renewable. The numbers are indicative, from the published list rates; your final quote depends on team size, term and commitment.
| Line item | Static | Flex | Difference |
|---|---|---|---|
| Hardware — CPU · GPU · RAM · storage · management, amortised monthly | £84 | £84 | — |
| Footprint — rack reservation at UK (B) · £132/kW · 0.21 kW reserved | £28 | £28 | — |
| Energy — £0.412/kWh · static assumes 730 h/mo · Flex bills the 173 h/mo your team works | £76 (153 kWh) | £18 (36 kWh) | −£58 |
| Per workstation, per month | £187 | £130 | −£58 |
| × Team of 20 | £3,748 | £2,596 | −£1,152 |
Metered billing only works if both sides see the same numbers. Flex separates the workstation from its energy line and uses the same telemetry feeding the charts above to set, show, and reconcile the kWh you’re charged for. Four steps, repeating every quarter.
The first 30 days establish a per-seat mean from the same telemetry feeding the charts above. That mean sets the kWh estimate on your first invoice.
Wall-power data streams every day to a customer-facing dashboard. You see what we see — per seat, per machine, per hour.
One bill covering hardware, footprint and the modelled energy line for the three months ahead. Predictable cashflow.
At quarter end, the modelled energy is reconciled against measured actuals. Next quarter’s estimate adjusts up or down to match.
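The reconcile step can be sketched as a single function; the rate and kWh values below are illustrative, not contract terms:

```python
def reconcile(modelled_kwh, measured_kwh, rate_gbp_per_kwh):
    """End-of-quarter adjustment, plus next quarter's estimate.

    A negative adjustment is a credit (the model over-estimated);
    next quarter's estimate moves to the measured actuals.
    """
    adjustment = (measured_kwh - modelled_kwh) * rate_gbp_per_kwh
    next_estimate = measured_kwh
    return adjustment, next_estimate

adj, nxt = reconcile(modelled_kwh=110.0, measured_kwh=98.0,
                     rate_gbp_per_kwh=0.412)
print(f"credit {-adj:.2f} GBP, next estimate {nxt:.0f} kWh")
# -> credit 4.94 GBP, next estimate 98 kWh
```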
Cloud workstations were priced for a world of cheap electricity. That world ended in 2022. Computle Flex separates hardware from energy — a fixed monthly fee for the workstation, plus the kWh you actually draw at wholesale, passed straight through. The savings in the data above stop landing in our margin and start landing in your bill.
A two-page PDF data sheet with the headline numbers, the report’s hero image, and PNG exports of every chart on this page. Free to reproduce with attribution to Computle.
© Computle Ltd · 2026. Reproduction permitted for press and editorial use with attribution: “Source: Computle Workstation Energy Report, May 2026.” Higher-resolution assets and interview requests: [email protected].
gpu_power_limit_w. Machines reported as “never hit 90% TDP” had no sustained reading at or above that threshold.

(6) Busiest / quietest hour. “Busiest hour” (Wed 12:00) is measured by the share of samples with non-idle utilisation. The “quietest vs busiest hour” ratio (73%) is measured by total fleet wall-power summed by (day-of-week × hour) across a 60-day window.

(7) Carbon intensity. UK national-grid emissions intensity is sourced live from the National Grid ESO Carbon Intensity API. The 155 g CO₂/kWh figure used for annualised comparisons is the 2024–25 UK national average; the live figure displayed elsewhere on this page updates every five minutes.

(8) Savings estimates. The indicative £ savings shown in the Pricing section assume a mid-spec RTX 5070 seat at the measured 95 W fleet mean, on UK (B), with the published Flex list rate for energy and footprint. All numbers are illustrative and subject to availability; this page is not a quotation or a binding offer. Real quotes depend on team size, contract term and committed volume.

(9) Per-tier stability. The 12-week stability claim is computed as weekly per-seat means within each GPU tier. Fleet composition changed across the window (active-machine count grew from 77 to 133); the claim applies within-tier, not across the full fleet.

(10) Renewable supply. All three Computle UK sites are matched 100% renewable via direct power-purchase agreement (PPA). Scope-2 operational emissions for a workstation on a Computle UK site are zero by this measure.

(11) Per-GPU 30-day mean GPU power.
- RTX 4000 SFF: 7 W mean · 70 W TDP · 23 machines
- RTX 5070: 8 W mean · 250 W TDP · 67 machines
- RTX 5080: 17 W mean · 360 W TDP · 46 machines
- RTX 5090: 71 W mean · 600 W TDP · 12 machines