
Commit e9088c3

Merge pull request #40 from ScalingIntelligence/ipw
Intelligence per Watt
2 parents: f1469e3 + d9c2ec3

File tree

4 files changed: +109 -0 lines

_blogs/ipw.md

Lines changed: 53 additions & 0 deletions
@@ -0,0 +1,53 @@
---
title: 'Intelligence Per Watt: A Study of Local Intelligence Efficiency'
authors:
- key: jonsaadfalcon
  equal: true
  affiliation: Stanford
- name: Avanika Narayan
  equal: true
  affiliation: Stanford University
- name: Hakki Orhun Akengin
  affiliation: Stanford University
- name: J. Wes Griffin
  affiliation: Stanford University
- name: Herumb Shandilya
  affiliation: Stanford University
- key: adriangamarralafuente
- name: Medhya Goel
  affiliation: Stanford University
- name: Rebecca Joseph
  affiliation: Stanford University
- key: shloknatarajan
- name: Etash Kumar Guha
  affiliation: Stanford University
- name: Shang Zhu
  affiliation: Together AI
- name: Ben Athiwaratkun
  affiliation: Together AI
- name: John Hennessy
  affiliation: Stanford University
- key: azaliamirhoseini
  affiliation: Stanford University
- name: Christopher Ré
  affiliation: Stanford University
venue: preprint
year: 2025
date: 2025-11-11
has_pdf: true
redirect: https://hazyresearch.stanford.edu/blog/2025-11-11-ipw
doi: 10.48550/arXiv.2511.07885
tags:
- machine learning
- systems
- hardware efficiency
teaser: We introduce intelligence-per-watt (IPW) to measure how efficiently inference systems convert energy into useful computation. Local LMs accurately respond to 88.7% of single-turn chat and reasoning queries, with local intelligence efficiency improving 5.3x from 2023-2025.
materials:
- name: Paper
  url: https://arxiv.org/abs/2511.07885
  type: file-pdf
- name: Codebase
  url: https://github.com/HazyResearch/intelligence-per-watt
  type: code
---
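Both files added here are Jekyll-style Markdown pages whose metadata is carried entirely in YAML front matter (the `key:` entries presumably resolve to author records elsewhere in the site). As a minimal sketch, not part of this commit and assuming PyYAML is available, the metadata can be read like so:

```python
import yaml  # PyYAML

def load_front_matter(path):
    """Return the YAML front matter of a Markdown file as a dict."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # Front matter sits between the first two '---' fences.
    _, fm, _ = text.split("---", 2)
    return yaml.safe_load(fm)

meta = load_front_matter("_blogs/ipw.md")
print(meta["title"])
for m in meta.get("materials", []):
    print(f"- {m['name']}: {m['url']}")
```

Run against `_blogs/ipw.md`, this would print the title followed by the Paper and Codebase links.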

_pubs/ipw.md

Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
---
title: 'Intelligence Per Watt: A Study of Local Intelligence Efficiency'
authors:
- key: jonsaadfalcon
  equal: true
  affiliation: Stanford
- name: Avanika Narayan
  equal: true
  affiliation: Stanford University
- name: Hakki Orhun Akengin
  affiliation: Stanford University
- name: J. Wes Griffin
  affiliation: Stanford University
- name: Herumb Shandilya
  affiliation: Stanford University
- key: adriangamarralafuente
- name: Medhya Goel
  affiliation: Stanford University
- name: Rebecca Joseph
  affiliation: Stanford University
- key: shloknatarajan
- name: Etash Kumar Guha
  affiliation: Stanford University
- name: Shang Zhu
  affiliation: Together AI
- name: Ben Athiwaratkun
  affiliation: Together AI
- name: John Hennessy
  affiliation: Stanford University
- key: azaliamirhoseini
  affiliation: Stanford University
- name: Christopher Ré
  affiliation: Stanford University
venue: preprint
year: 2025
date: 2025-11-11
has_pdf: true
doi: 10.48550/arXiv.2511.07885
tags:
- machine learning
- systems
- hardware efficiency
teaser: We introduce intelligence-per-watt (IPW) to measure how efficiently inference systems convert energy into useful computation. Local LMs accurately respond to 88.7% of single-turn chat and reasoning queries, with local intelligence efficiency improving 5.3x from 2023-2025.
materials:
- name: Paper
  url: https://arxiv.org/abs/2511.07885
  type: file-pdf
- name: Codebase
  url: https://github.com/HazyResearch/intelligence-per-watt
  type: code
- name: Blog post
  url: https://hazyresearch.stanford.edu/blog/2025-11-11-ipw
  type: link
---
Large language model (LLM) queries are predominantly processed by frontier models in centralized cloud infrastructure. Rapidly growing demand strains this paradigm, and cloud providers struggle to scale infrastructure at pace. Two advances enable us to rethink it: small LMs (≤20B active parameters) now achieve performance competitive with frontier models on many tasks, and local accelerators (e.g., Apple M4 Max) run these models at interactive latencies. This raises the question: can local inference viably redistribute demand from centralized infrastructure? Answering this requires measuring whether local LMs can accurately answer real-world queries and whether they can do so efficiently enough to be practical on power-constrained devices (e.g., laptops). We propose intelligence per watt (IPW), task accuracy per unit of power, as a metric for assessing the capability and efficiency of local inference across model-accelerator pairs. We conduct a large-scale empirical study across 20+ state-of-the-art local LMs, 8 accelerators, and a representative subset of LLM traffic: 1M real-world single-turn chat and reasoning queries. For each query, we measure accuracy, energy, latency, and power. Our analysis reveals 3 findings. First, local LMs can accurately answer 88.7% of single-turn chat and reasoning queries, with accuracy varying by domain. Second, from 2023 to 2025, IPW improved 5.3x and local query coverage rose from 23.2% to 71.3%. Third, local accelerators achieve at least 1.4x lower IPW than cloud accelerators running identical models, revealing significant headroom for optimization. These findings demonstrate that local inference can meaningfully redistribute demand from centralized infrastructure, with IPW serving as the critical metric for tracking this transition. We release our IPW profiling harness for systematic intelligence-per-watt benchmarking.
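The abstract defines the metric arithmetically: IPW is task accuracy divided by power. Below is a minimal, hedged sketch of that computation; the `QueryMeasurement` fields and the aggregation are illustrative assumptions, not the API of the released profiling harness.

```python
from dataclasses import dataclass

@dataclass
class QueryMeasurement:
    correct: bool          # did the model answer this query accurately?
    energy_joules: float   # energy consumed while serving the query
    latency_s: float       # wall-clock time to serve the query

def intelligence_per_watt(measurements: list[QueryMeasurement]) -> float:
    """IPW = task accuracy / average power draw (watts = joules / second)."""
    accuracy = sum(m.correct for m in measurements) / len(measurements)
    total_energy = sum(m.energy_joules for m in measurements)
    total_time = sum(m.latency_s for m in measurements)
    avg_power_watts = total_energy / total_time
    return accuracy / avg_power_watts

# Example: 2 of 3 queries answered correctly at an average draw of 20 W.
runs = [
    QueryMeasurement(True, 120.0, 6.0),
    QueryMeasurement(True, 80.0, 4.0),
    QueryMeasurement(False, 100.0, 5.0),
]
print(f"IPW: {intelligence_per_watt(runs):.4f} accuracy/W")
```

Averaging power as total energy over total time weights long-running queries proportionally; averaging per-query IPW would be an equally defensible design choice, and the paper's exact aggregation may differ.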

imgs/teasers/ipw.png (279 KB)

imgs/thumbs/ipw.png (844 KB)
