A Computle report · 13 May 2026

What workstations actually draw.

A reading of millions of live samples from our UK workstation fleet — what professional CAD, BIM and rendering machines really pull at the wall socket.

Live counters: days monitored · telemetry samples · 3 renewable-powered UK datacentres · low-carbon mix %
The opening

A ceiling on the spec sheet. Rarely touched in real work.

Open any GPU spec sheet and you will find a single large number, printed in bold: total board power, somewhere between 70 and 600 watts. It is a ceiling — the wattage the silicon will tolerate before it starts throttling itself. In professional workstation use, it is rarely reached. The workloads that do reach it — AI training, cryptocurrency mining — run hardware flat-out, around the clock. CAD, BIM and rendering, the workloads our customers actually run, look nothing like that.

For the past several months, we have been recording wall-socket power continuously across our UK workstation fleet — that is millions of measurements and growing. The data confirms what we always suspected: even our most demanding silicon runs, on average, at a fraction of its rated TDP.

The cloud, however, bills as if the opposite were true. Flat monthly rates quietly assume every machine — server, workstation, virtual desktop — runs at its rated TDP every working hour. Specifications and quotes look high on paper because providers have to cover the worst case: a customer who saturates the silicon, twenty-four hours a day, for the full term of the contract. Most workstation users don’t use their machines that way; ours certainly don’t. This report makes that gap visible — openly, with the underlying data and methodology published alongside.

Why this matters to your bill

168 hours billed. 40 hours of work.

This is normal. CAD, BIM and rendering are bursty by nature — quiet most of the day, with short peaks when something heavy is happening. The shape of the data on this page is what professional AEC work looks like. The problem isn’t the workload; it’s the billing model. Most cloud workstation contracts charge a flat monthly rate built on a worst-case assumption: the machine running flat-out, every hour, every day. The number on your invoice doesn’t move whether someone was working at the screen or the office was dark.

Real design work happens in a working week. The bill, on a flat contract, keeps ticking through evenings, nights and weekends — long after the office is empty. The bar below splits the week the way your contract does.

A workstation’s week, billed: 168 h — 40 h used (Mon 09:00 – Fri 17:00) · 128 h off-shift, still billed (evenings · nights · weekends)
Hours of real work
40h / week

Seats occupied, applications open, real work happening.

Hours off-shift, still on
128h / week

Evenings, nights, weekends. The machine is on. Nobody is at it.

Share of the bill paying for those idle hours
3 in 4 hours

Roughly three out of every four hours on a static contract are paying for a machine nobody is sitting at.
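The split is pure arithmetic, and worth making concrete. A minimal sketch in Python — no fleet data needed, the constants are just the working week:

```python
# Hours in a week a static contract bills for, vs hours of real work.
HOURS_BILLED = 24 * 7    # 168 h: a flat contract never stops metering
HOURS_WORKED = 5 * 8     # 40 h: Mon-Fri, 09:00-17:00

off_shift = HOURS_BILLED - HOURS_WORKED    # 128 h
idle_share = off_shift / HOURS_BILLED      # ~0.76

print(f"Off-shift but billed: {off_shift} h/week")
print(f"Share of the bill paying for idle hours: {idle_share:.0%}")  # 76% - roughly 3 in 4
```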

Live wall power

A real machine, watched in real time.

Abstractions aside, here is one of those machines. An RTX 5070 in our UK fleet, every reading streamed to the chart on the right. The dashed line above is its rated 250 W envelope — what the spec sheet says it can pull. The shaded curve is what the silicon actually does pull when someone is using it. Watch how rarely the two meet.

87.0 W · live · RTX 5070 · one machine
Idle baseline near 87 watts. Bursts when the user rotates a viewport or fires a render preview.
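For readers who want to reproduce a feed like this on their own machine: a minimal sketch of a GPU-side sampler, polling `nvidia-smi` at roughly the 5-second cadence described in the methodology notes. The query fields are standard `nvidia-smi` options; the schema and agent behind our actual telemetry are not published, so treat this as illustrative only.

```python
import csv
import subprocess
import sys
import time
from datetime import datetime, timezone

# Standard nvidia-smi query fields: watts drawn, % of cores busy, MiB of VRAM resident.
QUERY = "power.draw,utilization.gpu,memory.used"

def read_gpu() -> list[float]:
    """Take one reading via nvidia-smi (assumes a single GPU in the machine)."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        text=True,
    )
    return [float(v) for v in out.strip().split(", ")]

writer = csv.writer(sys.stdout)
writer.writerow(["utc", "gpu_w", "gpu_util_pct", "vram_mib"])
while True:
    writer.writerow([datetime.now(timezone.utc).isoformat()] + read_gpu())
    time.sleep(5)  # ~5 s sampling interval, matching the methodology notes
```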
The number that started this
An RTX 5070 drew, on average, just 38% of its 250 W rated TDP.
30-day fleet mean · UK
Utilisation

What the fleet does most of the day.

A workstation drawing 38% of its rated power isn’t under-spec’d — it’s doing exactly what professional CAD, BIM and rendering work asks of it. These workloads are bursty by design: short, intense peaks when geometry loads or a render kicks off, long quiet plateaus the rest of the time. The four figures below describe that rhythm — CPU in single digits, GPU memory barely tapped, RAM resident but unused, and at any moment four in every five seats are idle. Tap a figure for the day-by-day shape. The hardware isn’t wasted; an always-on contract is.

Representative day · three real machines + fleet average

Three individual machines plotted against the fleet average. The dashed line is what flat-priced cloud bills you for, regardless of activity.

Hour-by-day pattern

One week, hour by hour.

Average those daily figures across a working week and the 168 / 40 split from earlier becomes visible at a glance: a narrow band of mid-day activity, Monday to Friday — surrounded on every side by hours when the machines are on but nobody is at them. The grid below averages the 150-machine sample across the week, one cell per hour, darker for more power.
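For those with their own telemetry, a grid like this is a one-step aggregation. A sketch with pandas, assuming a tidy table of samples with a timestamp and a wall-power column (the column names here are illustrative, not our published schema):

```python
import pandas as pd

# samples.csv: one row per reading - columns: ts (ISO-8601), machine_id, wall_w
df = pd.read_csv("samples.csv", parse_dates=["ts"])

# One cell per (day-of-week, hour): mean wall power across the sample window.
heatmap = (
    df.assign(dow=df["ts"].dt.day_name(), hour=df["ts"].dt.hour)
      .pivot_table(index="dow", columns="hour", values="wall_w", aggfunc="mean")
      .reindex(["Monday", "Tuesday", "Wednesday", "Thursday",
                "Friday", "Saturday", "Sunday"])
)
print(heatmap.round(1))  # 7 rows x 24 columns - darker cells in the chart
```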

The architect’s week, in telemetry

Six things the data says about how AEC teams actually work.

Each of the figures below comes from the same heatmap, drawn from 60 days of telemetry across the 150-machine sample — each one a single lookup against that grid, as the sketch after the six cards shows. They describe a working week that’s remarkably consistent — and a baseline that never drops to zero.

Busiest hour of the week
Wed · 12:00

59% of the fleet shows activity. Every other slot in the week is at or below 57%. Hump-day really is the busy day.

The 09:00 start
9am sharp

Activity jumps by about a fifth between 08:00 and 09:00 every weekday. No earlier ramp — architects start work at 09:00.

The lunch dip
13:00

Fleet activity drops 4–5% from 12:00 to 13:00 every weekday, then recovers. The UK architecture industry takes lunch at one.

Friday wind-down
1 hour earlier

By 18:00 on Friday, fleet activity has already fallen to its weekend baseline. Mon–Thu hold above that mark until ~19:00.

Weekends are flat
~35%

Saturday and Sunday register the same activity at 11:00 as they do at 04:00. No ramp, no dip, no peak — just the background hum.

The machines never sleep
34% baseline

Activity never falls below 34% — even at 04:00 on a Sunday. On a flat contract the fleet has no off-state; that 34% floor is exactly what auto-shutdown exists to remove.
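As promised above, each of those six figures is a single lookup against the same grid. A sketch, assuming `activity` is the 7 × 24 pivot from the previous snippet, built over a boolean non-idle flag rather than watts:

```python
# `activity`: 7x24 DataFrame, rows Monday..Sunday, columns 0..23,
# values = share of samples that are non-idle in that slot.
busiest_slot = activity.stack().idxmax()    # -> ("Wednesday", 12)
weekly_floor = activity.min().min()         # -> ~0.34, the 24/7 baseline

weekdays = activity.loc["Monday":"Friday"]
nine_am_jump = weekdays[9].mean() - weekdays[8].mean()   # the 09:00 start
lunch_dip = weekdays[12].mean() - weekdays[13].mean()    # the 13:00 dip

weekend = activity.loc[["Saturday", "Sunday"]]
weekend_flatness = weekend.max().max() - weekend.min().min()  # near zero: flat
```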

Across the fleet · 30 days

A 150-machine sample. Five customers. One pattern.

A heatmap averages the fleet into one cell per hour. Here, instead, are the individual machines themselves — 150 of them from five different UK customers, each line one workstation, each cell one day’s mean. If the gap between rated and used were a quirk of a particular site, GPU or workload, you’d see it scatter. It doesn’t. Pick a GPU type to focus the lens.

Machines
150
GPU types
4
Fleet mean
13 W
Range
1–303 W
RTX 4000 SFF · static bill @ 35 W
RTX 5070 · static bill @ 125 W
RTX 5080 · static bill @ 180 W
RTX 5090 · static bill @ 300 W
Fleet mean (actual)
Inside the silicon · 1 of 3

GPU compute is barely touched.

One bar per workstation, sorted by value. On every tier except the 5090, compute averages a fraction of one percent across 30 days. Switch the metric toggle to see peak readings — the cores do fire, but in brief bursts, not as a sustained load. That’s how CAD, BIM and rendering work has always behaved.
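A sketch of how panels like these reduce from raw samples — per-seat 30-day means, then the lowest/median/top markers per tier (column names again illustrative):

```python
import pandas as pd

# samples.csv: one row per reading - columns: machine_id, gpu_tier, gpu_util_pct
df = pd.read_csv("samples.csv")

# One bar per workstation: mean compute utilisation across the 30-day window.
per_seat = (
    df.groupby(["gpu_tier", "machine_id"])["gpu_util_pct"].mean().sort_values()
)

# The three markers shown under each panel.
print(per_seat.groupby("gpu_tier").agg(lowest="min", median="median", top="max"))
```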

How to read
Share of the GPU's VRAM in use, averaged across every reading in the 30-day window. Higher means the card's memory carries more of the workload — bigger scenes, denser models, point clouds. Lower means most of the memory you paid for is empty most of the time.
How to read
The highest memory reading per seat, excluding the top-and-bottom outliers. Reads as 'what a busy day looks like'. Near 100% means the card fills up sometimes; well below means there's headroom that never gets touched.
How to read
Share of the GPU's processing cores doing actual work, averaged. A reading of 1% means that, almost every second of every day, fewer than one in a hundred of the cores are active. This is the number that justifies (or doesn't) the spec sheet.
How to read
The highest compute reading per seat, excluding outliers. A peak well below 100% means the cores never fully fire — even at the moment of a render, simulation or export, there's capacity left unused.
RTX 4000 SFF · 70 W
24 seats
The drafting seat
Tight, low cluster
Lowest
0.0%
Median
0.3%
Top
2.5%
RTX 5070 · 250 W
68 seats
The everyday architect
Long tail of busier seats
Lowest
0.0%
Median
0.5%
Top
5.4%
RTX 5080 · 360 W
46 seats
The visualisation seat
Heavy tail pushes past 70%
Lowest
0.0%
Median
0.5%
Top
12.2%
RTX 5090 · 600 W
12 seats
The render seat
Whole cohort runs warm
Lowest
0.0%
Median
14.3%
Top
25.0%
Inside the silicon · 2 of 3

Memory does more of the work than the cores.

The fleet mean hides a wide spread. Some workstations sit at memory loads under 1% all month. Others run up to 80% of their VRAM full. The panel below plots every seat in the sample for the selected GPU — one bar per machine, sorted by value. Switch the toggle to compare against compute utilisation. The cards spend their time holding data far more than they spend crunching it.

RTX 4000 SFF · 70 W
24 seats
The drafting seat
Tight, low cluster
Lowest
0.1%
Median
5.8%
Top
18.1%
RTX 5070 · 250 W
68 seats
The everyday architect
Long tail of busier seats
Lowest
0.2%
Median
11.5%
Top
68.0%
RTX 5080 · 360 W
46 seats
The visualisation seat
Heavy tail pushes past 70%
Lowest
0.2%
Median
8.7%
Top
72.4%
RTX 5090 · 600 W
12 seats
The render seat
Whole cohort runs warm
Lowest
0.2%
Median
27.3%
Top
81.3%
Inside the silicon · 3 of 3

Even at peak, the rated TDP is rarely reached.

The headline number on a GPU spec sheet is its total board power. For each machine in the sample, we took the single highest power reading across the 30-day window and asked: did it ever come within 90% of that ceiling?
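The test reduces to a per-seat maximum and a ratio. A sketch (the published method also discards non-sustained spikes, which this simple version does not; `gpu_power_limit_w` is the rated-TDP field named in the methodology notes):

```python
import pandas as pd

# samples.csv: machine_id, gpu_tier, gpu_w, gpu_power_limit_w
df = pd.read_csv("samples.csv")

peaks = df.groupby(["gpu_tier", "machine_id"]).agg(
    peak_w=("gpu_w", "max"),               # single highest reading, 30-day window
    tdp_w=("gpu_power_limit_w", "first"),  # the card's rated ceiling
)
peaks["peak_frac"] = peaks["peak_w"] / peaks["tdp_w"]

print(peaks.groupby("gpu_tier")["peak_frac"].agg(
    never_hit_90=lambda s: (s < 0.90).mean(),  # share of seats below the ceiling
    avg_peak="mean",                           # average peak as a fraction of TDP
))
```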

RTX 4000 SFF
87%
of seats never hit 90% TDP
Avg peak as % of TDP: 48%
RTX 5070
90%
of seats never hit 90% TDP
Avg peak as % of TDP: 38%
RTX 5080
89%
of seats never hit 90% TDP
Avg peak as % of TDP: 44%
RTX 5090
75%
of seats never hit 90% TDP
Avg peak as % of TDP: 62%
The summary

Six readings. Four cards. One workload shape.

Every reading on one table, measured against the card’s rated capacity. The shape is the story: bursty by design, quiet between bursts — and consistent across tiers. Click any card’s name to focus its column; hover the ? next to a row for what that row actually measures.

Measure                          RTX 4000 SFF   RTX 5070   RTX 5080   RTX 5090
Memory · mean                    7%             14%        12%        35%
Memory · peak                    31%            52%        49%        70%
Compute · mean                   0.9%           0.8%       1.3%       12%
Compute · peak                   76%            64%        62%        81%
GPU power · % of rated TDP       10%            3%         5%         11%
Average load vs rated capacity   6%             6%         6%         19%

How to read each row:

Memory · mean — How much of the card’s VRAM is occupied, averaged across the month. Higher means the GPU’s memory is doing more — holding bigger scenes, denser materials, point clouds. Lower means most of the VRAM you paid for is empty most of the time.

Memory · peak — The single highest VRAM use any seat hit during the 30-day window. If this is near 100% the card is sometimes full and the seat may be right-sized (or under-spec’d). If it’s well below 100%, there’s permanent headroom that’s never used.

Compute · mean — How busy the GPU’s processing cores are, averaged. A reading of 1% means that, almost every second of every day, fewer than one in a hundred of the cores are doing work. This is the number that justifies (or doesn’t) the spec sheet of a high-end card.

Compute · peak — The single highest compute reading any seat hit. A peak well under 100% means the cores never fully fire — even at the moment of a render or simulation, there’s still capacity left untouched on every tier.

GPU power · % of rated TDP — Average GPU wattage relative to the card’s rated TDP (the headline number on the box). 3% means the card is, on average, drawing about a thirtieth of what the spec sheet says it can pull. The bigger the gap, the more headroom you bought that the work never uses.

Average load vs rated capacity — Average GPU activity (memory + compute + power) expressed against the card’s rated spec. The number is low on every tier because professional CAD, BIM and rendering work is bursty by design — the spec sheet exists for the peak (the render, the heavy scene), and the rest of the time the card coasts. That shape is normal. What changes from one billing model to another is whether you’re charged for the peak or for the average.
The flagship works the hardest

The RTX 5090 is the only tier where average load crosses 10% — production rendering, ray-tracing and AI workflows give the cores something sustained to do. Where a 5090 is specified, it earns its keep.

Mid-range follows the work

On the 5070, 5080 and SFF, average load sits at 5–7%. That isn’t under-spec or over-spec — it’s the shape of the work. The card’s rated capacity is reserved for the peak; outside the peak, the silicon coasts. Architecture workloads have always looked like this.

Where the bill catches up

Static cloud contracts charge as if every card sat at the peak all day. The data above says the work doesn’t. The next chapter turns the difference between “peak” and “real” into pounds on the invoice.

The grid behind the bill

Every kWh has a number behind it.

Watts pulled at the wall socket are only half the carbon story. The other half is which power station produced them — and that number changes by the minute. Every PC sat at someone’s desk today is drawing from the national grid mix below. A Computle workstation isn’t. Its kWh comes from direct power-purchase agreements at all three of our UK sites.

A year, two grids

The same machine. Two grids. Two footprints.

Take that grid-attached kWh and stretch it over a full, always-on year. A typical RTX 5070 drawing the 95 W fleet mean for 8,760 hours pulls 832 kWh. Run that on the public UK grid, where the average kWh today still carries 155 grams of CO₂, and the scope-2 number falls out one way. Run the same workload on a Computle site buying renewable power directly, and it falls out another.
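The arithmetic behind both cards, spelled out (constants from this page: the 95 W fleet mean and the 155 g/kWh 2024–25 UK average):

```python
FLEET_MEAN_W = 95          # measured RTX 5070 fleet-mean wall power
HOURS_PER_YEAR = 8_760     # always-on: 24 h x 365 days
UK_GRID_G_PER_KWH = 155    # 2024-25 UK national-grid average intensity

kwh_per_year = FLEET_MEAN_W * HOURS_PER_YEAR / 1_000             # 832.2 kWh
kg_co2_public_grid = kwh_per_year * UK_GRID_G_PER_KWH / 1_000    # ~129 kg
kg_co2_direct_ppa = 0.0    # renewable-matched supply: zero scope-2 by this measure

print(f"{kwh_per_year:.0f} kWh/yr -> {kg_co2_public_grid:.0f} kg CO2 on the public grid")
```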

Public UK grid
129
kg CO₂ / year
832 kWh × 155 g CO₂ / kWh

What your scope-2 report would have to declare — a workstation drawing from the public UK grid for a full year.

Computle renewable
0
kg CO₂ / year
832 kWh × renewable supply

Same workload, renewable-supplied UK sites. Each kWh logged with the grid mix at the time it was drawn.

CO₂ avoided per 100 seats / year
~13 t

100 seats × 95 W fleet mean × 8,760 h × 155 g/kWh UK grid avg = 12.9 tonnes. Direct PPA renewable supply takes that to zero.

Of fleet energy used Mon-Fri 09–17
25%

Only one in four kWh on the bill is consumed during the standard work day. The other 75% lands in evenings, nights and weekends.

Of fleet energy used on weekends
28.5%

More energy is consumed on Saturdays and Sundays — when the office is empty — than during the full Mon–Fri working day.

Fleet idle, all the time
59.9%

Across the dataset’s 47M+ samples (and counting), six in ten readings show CPU under 5% AND GPU under 1%. Idle is the default state of a cloud workstation.

Idle during work hours
48.7%

Even between 09:00 and 17:00 Monday to Friday, almost half the fleet is at idle. The office may be busy; the silicon is not.

Quietest hour vs busiest hour
73%

At 01:00 on Sunday — the quietest hour of the week — the fleet still draws 73% of its busiest weekday hour. The machines never really sleep.

Why we built Flex

Cheap power ended in 2022. Cloud pricing didn’t.

Before the 2022 energy crisis, the inefficiency in the data on this page was tolerable: electricity was a small line in a cloud bill, and most customers didn’t question it. After Russia’s invasion of Ukraine, wholesale electricity prices in Europe spiked and stayed elevated. The gap between what the silicon actually draws and what static contracts bill for stopped being a curiosity and started being the difference between a manageable bill and an alarming one.

The fix needed scale. You can’t buy power at wholesale as a small operator; you need the volume to make a direct relationship work. We are now at that size. Computle Flex is the billing model that follows from buying at wholesale: a fixed monthly fee for the workstation, plus the kilowatt-hours your team actually draws — passed straight through. The next section turns the data above into what that means on your invoice.

What this actually saves a typical company.

Three things move when a fleet shifts from a static cloud contract onto Flex, and each one is backed by the data already on this page. The numbers below are for a mid-spec RTX 5070 workstation drawing the measured 95 W fleet average, on UK (B).

1 · Stop paying for the ceiling
≈ 40%

Static contracts bill ~163 W per seat (half rated TDP). Real workloads pull ~95 W. Switching to a metered bill removes the gap and takes about 40% off the energy line straight away.

2 · Stop paying off-shift
≈ 70% more

Of the energy that’s left, the machines are only actively used for 40 of the 168 hours in a week. Auto-shutdown out of hours strips another ~70% off the metered energy line.

3 · Bought at wholesale
Metered, not flat-rate

Computle buys its electricity on a wholesale, time-of-use basis — not a flat retail tariff. That’s the precondition for billing customers on actual usage rather than worst-case headroom.

10 seats · annual saving
≈ £4,300 / yr
vs the same fleet on a static cloud contract
50 seats · annual saving
≈ £21,500 / yr
typical mid-sized studio fleet
100 seats · over a 3-year term
≈ £128,000 cumulative
same workloads, same kWh, different billing model

Spec a workstation. Estimate the monthly bill.

The same telemetry behind every chart above feeds the calculator below. Pick the silicon, the memory, the storage and a UK site — all three sites on direct PPA renewable. Indicative numbers from the published list rates; your final quote depends on team size, term and commitment.

Team size: 20
Hours per week, per user: 40 h

Static · billed 730 h / month · current (fixed usage) pricing
£3,748 / mo
£187 per workstation

Flex · 173 h / month measured
£2,596 / mo
£130 per workstation

Estimated annual saving
£13,824 / yr

Your team would scale 31% more efficiently on Flex.
Breakdown · per workstation, per month
Three line items. Hardware and footprint are fixed; only energy moves with usage.
Line item · Static · Flex · Difference

Hardware — CPU · GPU · RAM · storage · management, amortised monthly
Static £84 · Flex £84

Footprint — rack reservation at UK (B) · £132/kW · 0.21 kW reserved
Static £28 · Flex £28

Energy — £0.412 / kWh · static assumes 730 h/mo · Flex bills the 173 h/mo your team works
Static £76 (153 kWh) · Flex £18 (36 kWh) · −£58

Per workstation, per month: £187 · £130 · −£58
× Team of 20: £3,748 · £2,596 · −£1,152
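A sketch of the derivable parts of that breakdown, using the reserved-kW constants above. Only the kWh side is computed here: the invoiced energy £ figures include supply components beyond the bare £0.412 unit rate, so they are taken as published rather than recomputed.

```python
RESERVED_KW = 0.21        # per-seat rack reservation at UK (B)
FOOTPRINT_RATE = 132      # GBP per kW per month

footprint = RESERVED_KW * FOOTPRINT_RATE   # ~GBP 28/mo, identical on both plans

static_kwh = RESERVED_KW * 730   # 153 kWh/mo: billed as if always on
flex_kwh = RESERVED_KW * 173     # 36 kWh/mo: the hours the team actually works

print(f"Energy metered: {static_kwh:.0f} vs {flex_kwh:.0f} kWh "
      f"-> {1 - flex_kwh / static_kwh:.0%} less on Flex")   # ~76% less
```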
How Flex bills you

Telemetry-anchored. Quarterly true-up. No surprises.

Metered billing only works if both sides see the same numbers. Flex separates the workstation from its energy line and uses the same telemetry feeding the charts above to set, show, and reconcile the kWh you’re charged for. Four steps, repeating every quarter — with a sketch of the true-up arithmetic after them.

01
Baseline

The first 30 days establish a per-seat mean from the same telemetry feeding the charts above. That mean sets the kWh estimate on your first invoice.

02
Monthly view

Wall-power data streams every day to a customer-facing dashboard. You see what we see — per seat, per machine, per hour.

03
Quarterly invoice

One bill covering hardware, footprint and the modelled energy line for the three months ahead. Predictable cashflow.

04
True-up

At quarter end, the modelled energy is reconciled against measured actuals. Next quarter’s estimate adjusts up or down to match.
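Under the simplest possible model, step 04 is one subtraction. A hedged sketch — the actual true-up terms are contractual; this only shows the direction of the adjustment:

```python
def true_up(modelled_kwh: float, measured_kwh: float, rate_gbp_per_kwh: float):
    """Settle last quarter against actuals and re-base next quarter's estimate."""
    adjustment = (measured_kwh - modelled_kwh) * rate_gbp_per_kwh  # credit if negative
    next_estimate_kwh = measured_kwh   # next quarter tracks what was measured
    return adjustment, next_estimate_kwh

# Modelled 110 kWh/seat for the quarter, measured 98: the gap comes back as credit.
adj, nxt = true_up(110, 98, rate_gbp_per_kwh=0.412)
print(f"Adjustment: GBP {adj:+.2f} per seat; next quarter modelled at {nxt} kWh")
```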

Pay for the kWh you used. Not the kWh we assumed.

Cloud workstations were priced for a world of cheap electricity. That world ended in 2022. Computle Flex separates hardware from energy — a fixed monthly fee for the workstation, plus the kWh you actually draw at wholesale, passed straight through. The savings in the data above stop landing in our margin and start landing in your bill.

Press kit

Take the data with you.

A two-page PDF data sheet with the headline numbers, the report’s hero image, and PNG exports of every chart on this page. Free to reproduce with attribution to Computle.

© Computle Ltd · 2026. Reproduction permitted for press and editorial use with attribution: “Source: Computle Workstation Energy Report, May 2026.” Higher-resolution assets and interview requests: [email protected].

NOTES & METHODOLOGY

(1) Sample. Telemetry sampled from a cohort of 150 anonymised workstations spread across five UK customers. Per-machine identifiers and tenant assignment are withheld; daily means are computed from the underlying telemetry stream. The public dataset behind this page contains 47,000,000+ samples and counting, with the aggregator ingesting roughly 200,000 new samples per day from a 77-day rolling window.

(2) Wall power. Wall power = (CPU package + GPU + 52 W system overhead) ÷ 0.88 PSU efficiency. CPU package power is read from Intel RAPL; GPU power from nvidia-smi. The 52 W overhead is a conservative system estimate (chipset, drives, fans, RAM); 0.88 is a typical workstation-class PSU efficiency.

(3) Idle. “Idle” means CPU utilisation below 5% AND GPU compute utilisation below 1%, sampled at roughly 5-second intervals. “Active” is the inverse. These thresholds were chosen to exclude background OS noise while still counting any genuine application work.

(4) Static-contract baseline. The 163 W/seat reference for static cloud-workstation contracts is 50% of the weighted-average rated TDP across the four GPU tiers in our sample, multiplied by 730 hours/month (always-on). Real static contracts vary; 50% × 730 h is the most common — and conservative — convention seen in published list pricing.

(5) Percent-of-TDP figures. Per-machine peak GPU power across a 30-day window divided by rated gpu_power_limit_w. Machines reported as “never hit 90% TDP” had no sustained reading at or above that threshold.

(6) Busiest / quietest hour. “Busiest hour” (Wed 12:00) is measured by the share of samples with non-idle utilisation. The “quietest vs busiest hour” ratio (73%) is measured by total fleet wall power summed by (day-of-week × hour) across a 60-day window.

(7) Carbon intensity. UK national-grid emissions intensity is sourced live from the National Grid ESO Carbon Intensity API. The 155 g CO₂/kWh figure used for annualised comparisons is the 2024–25 UK national average; the live figure displayed elsewhere on this page updates every five minutes.

(8) Savings estimates. The indicative £ savings shown in the pricing section assume a mid-spec RTX 5070 seat at the measured 95 W fleet mean, on UK (B), with the published Flex list rate for energy and footprint. All numbers are illustrative and subject to availability; this page is not a quotation or a binding offer. Real quotes depend on team size, contract term and committed volume.

(9) Per-tier stability. The 12-week stability claim is computed as weekly per-seat means within each GPU tier. Fleet composition changed across the window (active-machine count grew from 77 to 133); the claim applies within-tier, not across the full fleet.

(10) Renewable supply. All three Computle UK sites are matched 100% renewable via direct power-purchase agreement (PPA). Scope-2 operational emissions for a workstation on a Computle UK site are zero by this measure.

(11) Per-GPU 30-day mean GPU power. RTX 4000 SFF: 7 W mean · 70 W TDP · 23 machines. RTX 5070: 8 W mean · 250 W TDP · 67 machines. RTX 5080: 17 W mean · 360 W TDP · 46 machines. RTX 5090: 71 W mean · 600 W TDP · 12 machines.
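Notes (2) and (3) transcribe directly into code. A sketch using the stated constants — the 52 W overhead and 0.88 PSU efficiency are the report’s own estimates, and the example inputs are illustrative:

```python
SYSTEM_OVERHEAD_W = 52   # chipset, drives, fans, RAM - conservative estimate
PSU_EFFICIENCY = 0.88    # typical workstation-class PSU

def wall_power_w(cpu_pkg_w: float, gpu_w: float) -> float:
    """Note (2): wall power = (CPU package + GPU + overhead) / PSU efficiency."""
    return (cpu_pkg_w + gpu_w + SYSTEM_OVERHEAD_W) / PSU_EFFICIENCY

def is_idle(cpu_util_pct: float, gpu_util_pct: float) -> bool:
    """Note (3): idle = CPU utilisation under 5% AND GPU compute under 1%."""
    return cpu_util_pct < 5 and gpu_util_pct < 1

# Illustrative idle reading: ~17 W CPU package + ~8 W GPU lands near the
# ~87 W idle baseline shown on the live chart above.
print(f"{wall_power_w(17, 8):.1f} W at the wall")   # 87.5 W
```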