If you need to capture a screenshot of a website programmatically, you have two primary options: spin up a headless browser with Puppeteer, or call a managed API like len.sh. Both get the job done — but for very different use cases, with very different tradeoffs.
This post breaks down the real differences so you can make the right choice for your project.
What is Puppeteer?
Puppeteer is an open-source Node.js library maintained by Google’s Chrome team. It gives you full programmatic control over a headless (or headful) Chromium browser. You can automate nearly anything a human user can do: click buttons, fill forms, intercept network requests, render JavaScript-heavy pages, and yes — take screenshots.
Puppeteer is powerful precisely because it is a full browser. That power comes with real infrastructure costs: you need to install Chrome/Chromium, manage browser processes, handle memory, and deal with the operational complexity that comes with running headless browsers at scale.
For teams doing complex browser automation — multi-step scraping workflows, end-to-end testing, or custom rendering pipelines — Puppeteer is often the right tool. It was designed for those workloads.
What is len.sh?
len.sh is a managed screenshot API. You send an HTTP request with a URL and some parameters; you get back a screenshot. No browser to install. No process to manage. No infrastructure to scale.
The API handles all the headless browser complexity behind the scenes — JavaScript rendering, full-page capture, custom viewports, multiple output formats — and exposes it through a simple REST interface. It’s built for teams who need reliable screenshot generation as a feature, not teams who want to operate headless browser infrastructure.
Setup Comparison
The fastest way to understand the difference is to look at what “get a screenshot” looks like in each approach.
Puppeteer
```js
const puppeteer = require('puppeteer');

async function takeScreenshot(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 800 });
  await page.goto(url, { waitUntil: 'networkidle2' });
  const screenshot = await page.screenshot({
    fullPage: true,
    type: 'png',
  });
  await browser.close();
  return screenshot;
}
```
Before this runs, you also need:
```bash
npm install puppeteer
# Puppeteer downloads a compatible Chromium binary (~170MB)
```
And if you’re deploying this to a server (especially a Lambda or container), you need to:
- Bundle Chromium or install system dependencies
- Set the right flags for sandboxing (`--no-sandbox` in many environments)
- Handle `SIGTERM` and browser cleanup to avoid zombie processes
That’s before you’ve written a single line of your actual application logic.
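That cleanup item is easy to get wrong. One common pattern is a shutdown hook that closes the browser before the process exits; here is a minimal sketch (the `registerCleanup` helper and its injectable `exit` option are illustrative, not part of Puppeteer's API — with a live browser, `closeBrowser` would typically be `() => browser.close()`):

```javascript
// Register a shutdown hook so the headless browser is closed before exit,
// avoiding orphaned Chromium ("zombie") processes.
function registerCleanup(closeBrowser, { exit = (code) => process.exit(code) } = {}) {
  const shutdown = async () => {
    try {
      await closeBrowser(); // give Chromium a chance to terminate cleanly
    } finally {
      exit(0);
    }
  };
  process.once('SIGTERM', shutdown);
  process.once('SIGINT', shutdown);
  return shutdown; // returned so it can also be invoked directly
}
```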
len.sh
```bash
curl "https://api.len.sh/v1/screenshot?url=https://example.com&api_key=YOUR_API_KEY"
```
Or from JavaScript:
```js
async function takeScreenshot(url) {
  const response = await fetch(
    `https://api.len.sh/v1/screenshot?url=${encodeURIComponent(url)}&api_key=${process.env.LENSH_API_KEY}`
  );
  return response.arrayBuffer(); // Raw image bytes
}
```
No npm install. No binary download. No browser processes. No cleanup. One HTTP request.
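In production you would still want basic error handling around that one request. A hedged sketch of a retry wrapper (the helper name and backoff values are illustrative, not part of the len.sh API; `doFetch` is injectable so the logic can be exercised without real network calls):

```javascript
// Retry a screenshot request a few times with linear backoff.
// `doFetch` is any function returning a fetch-style Response.
async function fetchScreenshotWithRetry(url, doFetch = fetch, retries = 3) {
  let lastError;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const response = await doFetch(url);
      if (response.ok) return response.arrayBuffer();
      lastError = new Error(`HTTP ${response.status}`); // non-2xx: retry
    } catch (err) {
      lastError = err; // network failure: retry
    }
    await new Promise((r) => setTimeout(r, 250 * attempt));
  }
  throw lastError;
}
```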
Feature Comparison
| Feature | Puppeteer | len.sh |
|---|---|---|
| Setup time | 15–60 minutes (+ devops) | < 5 minutes |
| Infrastructure required | Yes (server + Chrome) | No |
| Full-page screenshots | Yes | Yes |
| Output formats | PNG, JPEG, PDF, WebP | PNG, JPEG, WebP, PDF |
| Custom viewport | Yes | Yes |
| JavaScript rendering | Yes (full browser) | Yes |
| Wait strategies | Yes (networkidle, timeout, custom) | Yes (delay parameter) |
| Auth / cookies | Yes (full control) | No (public URLs only) |
| PDF export | Yes | Yes |
| Caching | Manual / none | Automatic |
| Horizontal scaling | Manual (complex) | Automatic |
| Rate limiting handling | You build it | Managed |
| Cost model | Infrastructure cost | Per-request pricing |
| Air-gapped / offline | Yes | No |
| Custom browser interactions | Yes | No |
Scaling Challenges with Puppeteer
Running Puppeteer in production is where many teams hit a wall. A single Chromium instance can use 200–500MB of RAM, and it only gets worse under concurrent load.
Memory and Crashes
Headless Chrome leaks memory over time. Long-running Puppeteer processes need periodic restarts. If you’re handling concurrent requests, each one wants its own browser instance (or at minimum its own page), and managing a browser pool requires significant engineering effort.
Concurrency
A naive Puppeteer implementation creates a new browser per request. Ten simultaneous screenshot requests means ten Chrome instances — potentially 2–3GB of RAM for a burst of traffic. Efficient pooling (using libraries like puppeteer-cluster) reduces this but adds architectural complexity.
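The core of any pooling approach is bounding how many browsers (or pages) run at once. That idea can be sketched independently of puppeteer-cluster as a generic concurrency limiter (the helper below is illustrative; with Puppeteer, each `task` would open a page and take a screenshot, so the limit caps live Chromium pages):

```javascript
// Limit how many async tasks run concurrently; surplus tasks queue
// until a slot frees up.
function createLimiter(maxConcurrency) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= maxConcurrency || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task()
      .then(resolve, reject)
      .finally(() => {
        active--;
        next(); // a slot opened; start the next queued task
      });
  };
  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}
```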
Docker and Containers
Chromium has notoriously picky system dependencies. Getting it to run in a minimal Docker image requires installing specific shared libraries. Running it in AWS Lambda or Google Cloud Run requires bundling a compatible binary and setting --no-sandbox --disable-setuid-sandbox (with the security implications that follow).
Cold Starts
Spinning up a new Chromium process takes 1–3 seconds. In serverless environments, this cold start time compounds with the function invocation overhead. For high-frequency screenshot workloads, this latency is often unacceptable.
Operations Burden
None of this work is your core product. If you’re building a SaaS that generates preview images, you don’t want to spend engineering cycles managing browser process pools. You want to ship features.
When to Choose Puppeteer
Puppeteer is the right choice when:
- You need full browser automation beyond screenshots — form filling, login flows, multi-step scraping, intercepting network requests, or injecting JavaScript.
- You have complex wait conditions — waiting for specific DOM elements, network events, or custom JavaScript callbacks that a simple delay parameter can’t handle.
- You need to access authenticated content — pages behind login walls, cookies, or session state that requires browser-level session management.
- You’re building a scraping pipeline — where the screenshot is one step in a larger workflow that already involves browser automation.
- You have air-gapped or offline requirements — no external API calls allowed, everything must run on-premises.
- You want complete control — custom Chrome flags, extensions, CDP protocol access, or unconventional rendering pipelines.
If your use case fits any of these, Puppeteer (or Playwright, its more modern cousin) is likely the better fit despite the operational overhead.
When to Choose len.sh
len.sh is the right choice when:
- Screenshots are the end goal — you need website screenshots as a feature, not as part of a larger automation workflow.
- You want zero infrastructure overhead — no Chrome to install, no processes to manage, no server to provision.
- You’re working in any language or platform — Python, Ruby, PHP, Go, mobile apps — anything that can make an HTTP request works.
- You need reliable production behavior at scale — without building and maintaining browser pool management.
- You want fast integration — get from “idea” to “working screenshots in production” in under an hour.
- You care about consistency — a managed API provides a consistent rendering environment, while self-hosted Puppeteer setups can drift across environments.
The API supports the parameters that cover the majority of real-world screenshot needs:
```
https://api.len.sh/v1/screenshot
  ?url=https://example.com   # Target URL (required)
  &api_key=YOUR_API_KEY      # Authentication (required)
  &width=1280                # Viewport width in pixels
  &height=800                # Viewport height in pixels
  &format=png                # png | jpeg | webp | pdf
  &full_page=true            # Capture full scrollable height
  &delay=1000                # Wait ms before capture (for animations, etc.)
```
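Rather than hand-concatenating query strings, a small builder on top of the standard `URL` API keeps every parameter encoded correctly (the helper itself is illustrative; the parameter names mirror the list above):

```javascript
// Build a len.sh screenshot URL with properly encoded query parameters.
function buildScreenshotUrl(targetUrl, apiKey, options = {}) {
  const endpoint = new URL('https://api.len.sh/v1/screenshot');
  endpoint.searchParams.set('url', targetUrl);     // encoded automatically
  endpoint.searchParams.set('api_key', apiKey);
  for (const [key, value] of Object.entries(options)) {
    endpoint.searchParams.set(key, String(value)); // width, format, delay...
  }
  return endpoint.toString();
}
```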
Code Comparison: Building an OG Image Generator
Open Graph (OG) images are a common use case — dynamically generated preview images for social sharing. Here’s what building an OG image endpoint looks like with each approach.
Puppeteer Version (~35 lines)
```js
const puppeteer = require('puppeteer');
const express = require('express');
const app = express();

let browser;

async function getBrowser() {
  if (!browser || !browser.isConnected()) {
    browser = await puppeteer.launch({
      args: ['--no-sandbox', '--disable-setuid-sandbox'],
    });
  }
  return browser;
}

app.get('/og-image', async (req, res) => {
  const { url } = req.query;
  if (!url) return res.status(400).send('Missing url');
  let page;
  try {
    const browser = await getBrowser();
    page = await browser.newPage();
    await page.setViewport({ width: 1200, height: 630 });
    await page.goto(url, { waitUntil: 'networkidle2', timeout: 10000 });
    const screenshot = await page.screenshot({ type: 'png' });
    res.set('Content-Type', 'image/png');
    res.send(screenshot);
  } catch (err) {
    res.status(500).send(err.message);
  } finally {
    if (page) await page.close();
  }
});

app.listen(3000);
```
This doesn’t account for browser pool management, memory limits, crash recovery, or graceful shutdown. A production-ready version is significantly longer.
len.sh Version (~5 lines)
```js
app.get('/og-image', async (req, res) => {
  const { url } = req.query;
  if (!url) return res.status(400).send('Missing url');
  const apiUrl = `https://api.len.sh/v1/screenshot?url=${encodeURIComponent(url)}&width=1200&height=630&format=png&api_key=${process.env.LENSH_API_KEY}`;
  const upstream = await fetch(apiUrl);
  if (!upstream.ok) return res.status(502).send('Screenshot service error');
  res.set('Content-Type', 'image/png');
  res.send(Buffer.from(await upstream.arrayBuffer()));
});
```
Same result. Dramatically less code. No browser process to manage.
Decision Framework
Not sure which to use? Work through these questions:
1. **Do you need to interact with the page (click, scroll, fill forms, run custom JS)?**
   - Yes → Puppeteer
   - No → continue
2. **Does the page require authentication or session cookies?**
   - Yes → Puppeteer
   - No → continue
3. **Are you working in a non-Node.js environment?**
   - Yes → len.sh (Puppeteer is Node.js only)
   - No → continue
4. **Do you need to take more than a handful of screenshots per day?**
   - Yes → consider len.sh (scaling is handled for you)
   - No → either works; Puppeteer is free at low volume
5. **Is screenshot generation a side feature, not your core product?**
   - Yes → len.sh (don’t build infrastructure for non-core features)
   - No → Puppeteer gives you more control
If you reach the end without a clear answer, start with len.sh. You can always switch if you hit a limitation — but you probably won’t.
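For teams who like their decision frameworks executable, the questions above can be encoded as a tiny function (purely illustrative; the answer names are made up for this sketch):

```javascript
// Walk the decision questions in order; the first decisive answer wins.
function chooseScreenshotTool(answers) {
  if (answers.needsInteraction) return 'puppeteer';   // clicks, forms, custom JS
  if (answers.needsAuth) return 'puppeteer';          // login walls, session cookies
  if (answers.nonNodeEnvironment) return 'len.sh';    // Puppeteer is Node.js only
  if (answers.highVolume) return 'len.sh';            // scaling handled for you
  if (answers.sideFeature) return 'len.sh';           // skip non-core infrastructure
  return 'puppeteer';                                 // core product, full control
}
```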
Conclusion
Puppeteer is a genuinely powerful tool, and for complex browser automation it remains the gold standard. If you’re building end-to-end tests, scraping authenticated content, or orchestrating multi-step browser workflows, you should use it.
But if you need screenshots — just screenshots — managing your own headless browser infrastructure is a significant overhead that doesn’t serve your users. len.sh exists to handle that operational complexity so you can focus on building your product.
The right tool depends on what you’re building. For most web screenshot use cases, len.sh gets you to production faster with less code and none of the browser management headaches.
Ready to try it? You can get started with len.sh at len.sh — your first requests are free.