⚠ Dangerously Educational

What The PyPI Place Is

The PyPI Place (PPP) is a very creatively named, continuously running watchdog that tests whether packages from the Python Package Index actually install and function across multiple environments.

PyPI is the repository of software for the Python programming language — it's how the community finds, installs, and shares code. Over 760,000 packages live there, added by anyone, with no quality floor by design.

PPP operates on a free-tier Oracle ARM instance. It pulls every new package release, attempts installs across Python 3.9, 3.11, and 3.12 on both slim Debian and Alpine Linux containers, and publishes the results openly in real time.

The project exists because — by design — PyPI has no quality floor.
3.9    python:3.9-slim
3.11   python:3.11-slim
3.12   python:3.12-slim

Each new release runs through a four-phase test: dependency resolution, no-dependency install, full install, and an import check. Every result is timestamped, structured, and published. Nothing is discarded.

The Dataset

Every test run generates structured, timestamped records of exactly which packages install successfully across which Python versions and Linux distributions. Over time this creates a longitudinal picture of the PyPI environment.

# example test result record
{
  "package": "example-lib",
  "version": "2.1.0",
  "environment": "py312-slim",
  "result": "PASS",            # or FAIL / TIMEOUT
  "phase_download": "PASS",
  "phase_nodeps": "PASS",
  "phase_full": "PASS",
  "phase_import": "PASS",
  "observed_at": 1714000000
}

The dataset surfaces several distinct failure categories:

Alpine failures

Packages that break on Alpine due to missing system libraries — a very common failure mode that surprises teams moving to slim container images in CI.

Version deprecation failures

Packages that fail silently on Python 3.12 or 3.13 due to deprecated APIs — often packages that claim compatibility they don't actually have.

Phantom successes

Packages that install but cannot be imported. The install log is green. The runtime is broken. This category alone surfaces packages that would waste hours of developer time.

For anyone managing dependencies, building Docker images, or trying to understand why a CI pipeline fails on one environment but not another — this cross-environment install matrix is the reference that doesn't exist anywhere else.

As the dataset grows it becomes possible to answer questions that have no good answer today: which packages have never successfully installed on Alpine? Which dependency chains are structurally broken across Python versions — with real empirical test results, not self-reported metadata?
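Questions like the one above become one-liners once the records sit in SQLite. A minimal sketch, assuming a hypothetical `results` table with `package`, `environment`, and `result` columns (the watchdog's real schema is not published here, so these names are illustrative):

```python
import sqlite3

# Hypothetical schema standing in for the watchdog's real database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE results (
        package TEXT, version TEXT, environment TEXT,
        result TEXT, observed_at INTEGER
    )
""")
conn.executemany(
    "INSERT INTO results VALUES (?, ?, ?, ?, ?)",
    [
        ("example-lib", "2.1.0", "py312-alpine", "PASS", 1714000000),
        ("example-lib", "2.1.0", "py312-slim",   "PASS", 1714000000),
        ("other-lib",   "0.3.0", "py312-alpine", "FAIL", 1714000100),
        ("other-lib",   "0.3.0", "py311-slim",   "PASS", 1714000100),
    ],
)

# "Which packages have never successfully installed on Alpine?"
never_on_alpine = [
    row[0]
    for row in conn.execute("""
        SELECT DISTINCT package FROM results
        WHERE environment LIKE '%alpine%'
          AND package NOT IN (
              SELECT package FROM results
              WHERE environment LIKE '%alpine%' AND result = 'PASS'
          )
    """)
]
print(never_on_alpine)  # → ['other-lib']
```

The same shape of query answers the cross-version questions: swap the `environment LIKE` filter for a version prefix.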

Where This Comes From

PyPI launched in 2003, over 20 years ago. It was bare-bones for a long time — the modern infrastructure with proper APIs and verified downloads came much later, around 2018-2019 with the Warehouse rewrite.

Twenty years ago I was aware of PyPI, but only vaguely. I "worked with computers" — meaning I maintained a retail website for a living, with a total of about two years of university credits from four different colleges in three different states. I suppose I thought I had better things to do, or at least assumed I should have something better to do and had better go find it.

For two of those semesters I was majoring in computer science, which is just enough time to realize that although programming in C is fun, the way it was being taught was so divorced from any application in the real world that it was all going in one ear and out the other. I picked up a few concepts by rote and through osmosis — and more by regularly editing JavaScript files, copy-pasting SQL, wrangling PHP. Oh, and a lot of wasted time with ActionScript and flipping through C++ books about "making apps" that never actually showed how to make apps. No software developer, me.

But I certainly knew I'd be as capable as any recent grad at being put to work on a corporate project. I went to high school with many people paid six-figure salaries in their twenties to be "put on projects" — tasked with what amounted often to data entry. Many of them have millions of dollars now.

Did I mention I was getting paid eight dollars an hour under the table to run that retail website, run the marketing department, design the pages, and troubleshoot the backend? The "office" was down two very long sets of stairs in a subbasement stockroom under a salon in Soho, NYC. My head less than a hundred feet from the regularly rumbling Broadway-Lafayette platform. There was literal sewage running through gutters in the floor covered by steel grating. Rats did occasionally scurry over monitors. I also packed the orders, received deliveries, spoke to customers on the phone, resolved issues, advised them on products. The worst part was dragging two-thousand-pound pallets stacked six feet high over cobblestone.

Twenty years ago, if I had been cut a break and been allowed an entry-level position working on a project like The PyPI Place, the company doing the work would have needed a real budget.

What This Would Have Cost in 2006

Layer by layer — then (2003–2006) vs. now:

Infrastructure
  Then: Rack unit rental, hosting contracts, T1 line negotiations.
  Now:  Oracle Always Free ARM. $0.

Mesh networking
  Then: Build on top of early Tor or I2P (primitive, slow) — or design your own overlay network, which is a PhD-level project.
  Now:  Yggdrasil. Self-organizing. Free.

Environment isolation
  Then: Separate physical machines or early VMware (expensive, finicky) — one server per Python version.
  Now:  Docker. Ephemeral containers. Free.

Verification layer
  Then: Cryptographic hashing existed, but integrating it into an automated pipeline meant writing substantial C or Java.
  Now:  Python hashlib. Three lines.

Coordination
  Then: Every architectural decision is a meeting. Every integration problem is a ticket. Every config mistake: a debugging session lasting days.
  Now:  One person. One laptop. One free instance.

In 2006 this would have been a serious enterprise software contract: distributed package integrity monitoring with cryptographic verification, mesh network delivery, and automated anomaly detection across multiple Python environments. A team of 3–4 engineers. Six to nine months. Java or C++ everywhere. XML config files the size of phone books. A bill somewhere between $400,000 and $1.2 million depending on who was selling it.

And it still wouldn't have had the mesh part, because that infrastructure didn't exist yet.

The value of something like The PyPI Place hasn't changed. The cost of building it has collapsed to approximately nothing.

The Argument

One person with a cheap laptop and a free ARM instance in Oracle's cloud, testing the integrity of an ecosystem containing 760,000 packages — the absurdity of the scale is much of the point.

This is performance art: the act of doing it at all, with very little, is the critique. It says: the infrastructure for this has always existed. It was always possible. No institution or well-funded team ever bothered.

The dissemination is built into the architecture: the dashboard serves live results publicly at zero cost, the data is meant to be scraped and reused, and the explicit invitation is for anyone with a free-tier instance or an idle laptop to run their own node. The more people do it, the more robust the data.

The Writing Machine

Going forward, the plan is to increase weirdness. Results are published as feeds, read by synthetic presenters, streamed in the glitch-aesthetic terminal voice the project already speaks in — turning dry install logs into a kind of ongoing public performance about the state of the commons.

A 7B model — the smallest serious model, the one that fits on hardware poor people can actually afford, the one that runs without a GPU — sitting on the same free Oracle instance as the watchdog, watching the test results come in, and writing summaries in the form of play-by-play and color commentary. Not simply generating text: applying logic to a workflow that results in truthful statements. No human curator. Content generated as output of a complex and deliberate machine. No profit motive. A system built only to help and to entertain.

The writing machine is subject to the same constraints as everything else this project makes: it runs on what's free, it runs on what's small, it runs on what poor people can run.

And what it writes about is failure. Package after package, environment after environment, the silent accumulating record of things that don't work. The writing machine narrates the broken state of a commons that nobody was watching, in prose generated by the smallest model that can make prose, on infrastructure that costs nothing, for an audience that doesn't exist.

The medium is the message and the message is: this was always possible, the tools were always here, and a 7B model on a free ARM instance is now sufficient to create something that contributes in a unique way.

YggCrawl: The People's Internet Layer

YggCrawl broadcasts the dataset out over a parallel network that the commercial internet doesn't touch.

Yggdrasil is a self-organizing encrypted IPv6 mesh that exists outside the DNS/CDN/corporate routing layer entirely. It runs over ordinary IPv4 transport but operates as an autonomous network — no registrar, no CDN, no billing dispute can take it down.

The dashboard at sovereignmail.org/pypiplace is the public face.
YggCrawl is the version that exists in the commons that can't be taken down.

The PyPI Place results flowing over Yggdrasil means the data is accessible to people already running their own mesh nodes, already operating outside the default topology. That's the People's Internet argument made concrete at the network layer.

● ARTIFACT-FIRST DESIGN

The system does not care if a process is running. It cares whether current.json and current.json.sha256 exist. Everything else is just a means to produce those artifacts.

● DETERMINISTIC CONVERGENCE

Nodes fetch snapshots from peers, verify hash, validate schema, merge deterministically. No trust required. No central coordinator.

● OPEN REPLICATION

Anyone with a free-tier instance can run their own node. The more nodes, the more robust the data. The invitation is explicit and architectural.

# snapshot served at your Yggdrasil IPv6 address
$ curl http://[200:xxxx:...]:8080/current.json

# verify
$ curl http://[200:xxxx:...]:8080/current.json.sha256

# compare
$ sha256sum current.json
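The same verification step, done in Python instead of with `sha256sum` — this is the "three lines of hashlib" the project leans on. The snapshot bytes here are a stand-in for a fetched current.json; the function itself is the whole trust model:

```python
import hashlib
import json

# Stand-in for a current.json fetched from a peer, plus the published
# hash that would arrive as current.json.sha256.
snapshot = json.dumps({"package": "example-lib", "result": "PASS"}).encode()
published_hash = hashlib.sha256(snapshot).hexdigest()

def verify(data: bytes, expected_hex: str) -> bool:
    """Accept a snapshot only if its SHA-256 matches the published hash."""
    return hashlib.sha256(data).hexdigest() == expected_hex

assert verify(snapshot, published_hash)             # intact snapshot accepted
assert not verify(snapshot + b" ", published_hash)  # any tampering rejected
```

No trust required: a peer that lies about either file produces a mismatch and the snapshot is simply discarded.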

How It Works

PyPI RSS feed
  ↓ watchdog/rss_poller.py — fetch new releases
  ↓ watchdog/batch_runner.py — install test × 3 environments → SQLite
  ↓ watchdog/batch_runner.py — emit result records → outbox/
  ↓ yggcrawl/ingest.py — validate + merge into node index
  ↓ yggcrawl/main.py — generate snapshot + hash + diff
  ↓ yggcrawl/network.py — serve over Yggdrasil IPv6 HTTP
  ↓ peers — fetch / verify / merge / repeat
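The first step of that pipeline — turning PyPI's updates feed into (package, version) pairs — can be sketched with the standard library alone. A sample document stands in for the live fetch of https://pypi.org/rss/updates.xml, whose item titles have the shape "name version"; rss_poller.py's actual internals are not published here:

```python
import xml.etree.ElementTree as ET

# Sample standing in for a live fetch of PyPI's updates feed.
sample_feed = """<?xml version="1.0"?>
<rss version="2.0"><channel>
<item>
<title>example-lib 2.1.0</title>
<link>https://pypi.org/project/example-lib/2.1.0/</link>
</item>
</channel></rss>"""

def new_releases(feed_xml: str):
    """Yield (package, version) pairs from an RSS updates document."""
    for item in ET.fromstring(feed_xml).iter("item"):
        # Titles look like "example-lib 2.1.0"; split on the last space.
        name, _, version = item.findtext("title").rpartition(" ")
        yield name, version

print(list(new_releases(sample_feed)))  # → [('example-lib', '2.1.0')]
```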

Four-phase testing

Every package runs four phases per environment, with each phase gating the next:

phase_download   pip download --no-deps   (resolution only — does PyPI even have it?)
phase_nodeps     pip install --no-deps    (package integrity, no dep graph)
phase_full       pip install              (full dependency resolution)
phase_import     python -c "import X"     (runtime viability — does it actually work?)
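The gating can be sketched as follows — a failed phase short-circuits everything after it, but every phase still gets a recorded verdict. This is an illustration of the gating logic, not batch_runner.py itself; the injectable `run` callable is an assumption added so the sketch can be exercised without shelling out to pip:

```python
import subprocess

# The four phases, in gate order (import phase is appended separately
# because it needs the module name, not the package name).
PHASES = [
    ("phase_download", ["pip", "download", "--no-deps"]),
    ("phase_nodeps",   ["pip", "install", "--no-deps"]),
    ("phase_full",     ["pip", "install"]),
]

def run_phases(package: str, module: str, run=None) -> dict:
    """Run each phase in order; a failure gates every later phase."""
    if run is None:  # default: actually shell out
        run = lambda cmd: subprocess.run(cmd, capture_output=True).returncode == 0
    results, ok = {}, True
    for name, cmd in PHASES:
        ok = ok and run(cmd + [package])   # short-circuits once gated
        results[name] = "PASS" if ok else "FAIL"
    ok = ok and run(["python", "-c", f"import {module}"])
    results["phase_import"] = "PASS" if ok else "FAIL"
    results["result"] = "PASS" if ok else "FAIL"
    return results

# Gating demo with a stub runner: the full install "fails",
# so phase_full and phase_import both come back FAIL.
stub = lambda cmd: "install" not in cmd or "--no-deps" in cmd
print(run_phases("example-lib", "example_lib", run=stub))
```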

Infrastructure

ORACLE ARM A1

4 OCPU / 24 GB RAM. Watchdog, Docker test runner, writing machine, broadcaster. Always Free tier. $0/month.

ORACLE x86 MICRO

1 OCPU / 1 GB RAM. nginx, static site, RSS feed, YggCrawl node. Always Free tier. $0/month.

YGGDRASIL MESH

Self-organizing IPv6 overlay. Snapshot broadcasting. No hosting contract. No CDN. No single point of failure.

Scheduling

# systemd timers — no cron
watchdog      *-*-* 00,06,12,18:00:00   every 6 hours
writer        *-*-* 20:00:00            after watchdog completes
broadcaster   *-*-* 22:30:00            renders for 11pm
site          *-*-* 23:15:00            publish episode + RSS
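Those schedule strings are ordinary systemd OnCalendar expressions. A sketch of what the watchdog timer could look like — the unit name and install path are assumptions, not the project's actual files:

```ini
# /etc/systemd/system/watchdog.timer  (hypothetical name and path)
[Unit]
Description=Run the PPP watchdog every six hours

[Timer]
OnCalendar=*-*-* 00,06,12,18:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

The matching watchdog.service would be a Type=oneshot unit invoking batch_runner.py; each timer in the table above pairs with a service the same way.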
Single SQLite database. All components read and write through it. No external APIs at runtime. No cloud services beyond free-tier compute. Everything runs on what's free, what's small, what poor people can run.