
4Byte
Feb 9, 2026

Most software on the web is built to maximize extraction: of attention, of engagement, of behavioral data.

This project is not optimized for any of that.
It is optimized for reading.

The codebase currently ships as a Rust TUI browser (4byte, package forbyte) that treats the web page as a document first, and everything else as optional.

Design Constraints

The implementation is intentionally opinionated:

  1. Fetch the HTML document.
  2. Do not auto-fetch subresources.
  3. Render text.
  4. Let users explicitly request media.

That is not a missing feature set. That is the feature set.

net::fetch_html does a single document request.
parse::render_text_lines turns HTML into wrapped text via html2text.
parse::extract_images builds an inventory of <img> references without downloading them.
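To make the "inventory without downloading" idea concrete, here is a hypothetical std-only sketch. The real parse::extract_images presumably walks a parsed DOM; this version just scans raw HTML for img tags, but the shape is the same: record references, fetch nothing.

```rust
// Illustrative sketch, not the app's actual parser: collect the src
// of every <img> tag without making a single network request.
fn extract_images(html: &str) -> Vec<String> {
    let mut srcs = Vec::new();
    let mut rest = html;
    while let Some(tag_start) = rest.find("<img") {
        let tag = &rest[tag_start..];
        // Bound the scan to this one tag.
        let tag_end = tag.find('>').map(|i| i + 1).unwrap_or(tag.len());
        let tag_body = &tag[..tag_end];
        if let Some(src_pos) = tag_body.find("src=\"") {
            let after = &tag_body[src_pos + 5..];
            if let Some(end) = after.find('"') {
                srcs.push(after[..end].to_string());
            }
        }
        rest = &rest[tag_start + tag_end..];
    }
    srcs
}

fn main() {
    let html = r#"<p>Hi</p><img src="/a.png" alt=""><img src="/b.jpg">"#;
    let imgs = extract_images(html);
    assert_eq!(imgs, vec!["/a.png", "/b.jpg"]);
}
```

The point of the data structure is that the user sees a list of what the page wants to load, and nothing in that list has cost them any bytes yet.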

The default behavior has no autoplay pipeline, no third-party script execution, and no layout engine trying to impersonate a casino.

Architecture in Practice

The project is split into modules with narrow responsibilities: net for the single document request, parse for text rendering and the image inventory, privacy for URL hygiene, cache for local TTL storage, search for SearXNG queries, and app for the event loop.

This split keeps network behavior legible. If the app touches the network, there is exactly one place to look.

Network Model: Explicit Is Better Than "Helpful"

On load/reload, only the HTML document itself is fetched; images and other subresources are inventoried, not downloaded.

Manual image download goes through app::download_selected, which applies domain policy before any bytes move.

The code calls this "confirm." Marketing teams call this "friction." Both descriptions are correct.
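A minimal sketch of what a domain gate in front of downloads can look like. The names here (DomainPolicy, is_allowed, confirm) are illustrative assumptions, not the app's actual API; the idea is only that a download requires a prior explicit user decision for that host.

```rust
use std::collections::HashSet;

// Hypothetical domain gate: nothing downloads unless the user has
// confirmed the host. "Confirm" is an explicit action, never implied.
struct DomainPolicy {
    allowed: HashSet<String>,
}

impl DomainPolicy {
    fn new() -> Self {
        DomainPolicy { allowed: HashSet::new() }
    }

    // Check runs before any request is made.
    fn is_allowed(&self, host: &str) -> bool {
        self.allowed.contains(host)
    }

    // Records a user decision for this host.
    fn confirm(&mut self, host: &str) {
        self.allowed.insert(host.to_string());
    }
}

fn main() {
    let mut policy = DomainPolicy::new();
    assert!(!policy.is_allowed("example.com"));
    policy.confirm("example.com");
    assert!(policy.is_allowed("example.com"));
}
```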

Tracking Hygiene

Navigation paths strip common tracking parameters (campaign and click identifiers) in privacy::strip_tracking_params.

Bookmarks and outgoing navigation paths are normalized through the same flow.
If a URL arrives bloated with campaign metadata, it leaves cleaner than it arrived.

No, this does not make fingerprinting disappear. It just refuses to volunteer extra telemetry.
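The stripping step can be sketched in std-only Rust. The actual parameter list in privacy::strip_tracking_params is not reproduced here; utm_*, fbclid, and gclid below are well-known examples standing in for whatever the real list contains.

```rust
// Illustrative sketch: drop campaign/click identifiers from a URL's
// query string, keep everything else. The filter list is an example,
// not the app's actual list.
fn strip_tracking_params(url: &str) -> String {
    let (base, query) = match url.split_once('?') {
        Some((b, q)) => (b, q),
        None => return url.to_string(),
    };
    let kept: Vec<&str> = query
        .split('&')
        .filter(|pair| {
            let key = pair.split('=').next().unwrap_or("");
            // Common tracking keys seen in the wild.
            !(key.starts_with("utm_") || key == "fbclid" || key == "gclid")
        })
        .collect();
    if kept.is_empty() {
        base.to_string()
    } else {
        format!("{}?{}", base, kept.join("&"))
    }
}

fn main() {
    let url = "https://example.com/post?id=7&utm_source=feed&fbclid=abc";
    assert_eq!(strip_tracking_params(url), "https://example.com/post?id=7");
}
```

Running every navigation and bookmark through one function like this is what makes the "leaves cleaner than it arrived" guarantee checkable in one place.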

State Machine, Not Feed Machine

app::run is a direct event loop over crossterm key events.
Screens are explicit (Page, Links, Toc, Bookmarks, Address, Find, Palette, Search).
Actions are explicit key bindings.

There is no ranking model deciding what you should read next. There is no recommendation rail pretending to be discovery while optimizing ad adjacency. There is no growth loop hidden behind "personalization."

The software waits for input, performs the requested operation, and reports status.
This used to be normal.
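The loop's core can be reduced to a pure transition function. This sketch uses a subset of the real Screen variants and invented key bindings; the real app::run matches on crossterm events, but the shape is the same: explicit screens, explicit bindings, no ambient behavior.

```rust
// Illustrative state machine: what happens next is a total function
// of (current screen, key pressed). Nothing ranks, nothing recommends.
#[derive(Debug, PartialEq)]
enum Screen { Page, Links, Bookmarks }

#[derive(Debug, PartialEq)]
enum Action { Switch(Screen), Quit, None }

// Hypothetical bindings, not the app's actual keymap.
fn handle_key(screen: &Screen, key: char) -> Action {
    match (screen, key) {
        (_, 'q') => Action::Quit,
        (Screen::Page, 'l') => Action::Switch(Screen::Links),
        (Screen::Page, 'b') => Action::Switch(Screen::Bookmarks),
        (Screen::Links, '\x1b') => Action::Switch(Screen::Page), // Esc
        _ => Action::None,
    }
}

fn main() {
    assert_eq!(handle_key(&Screen::Page, 'l'), Action::Switch(Screen::Links));
    assert_eq!(handle_key(&Screen::Links, 'q'), Action::Quit);
    assert_eq!(handle_key(&Screen::Bookmarks, 'z'), Action::None);
}
```

A function like this is trivially testable, which is another way of saying the UI has no hidden inputs.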

Offline and Export Paths

The project has two practical exits from the live web: a local cache for offline reading, and an export path for saving content out of the browser.

Caching (cache::fetch_html_cached, cache::try_get_cached_image) is TTL-based and local.
It exists to reduce redundant network calls, not to build a behavior profile.
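A TTL cache of this kind can be sketched with nothing but the standard library. The struct and method names below are assumptions; the real cache::fetch_html_cached presumably wraps something similar around the network fetch.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Illustrative TTL cache: entries expire by age, live locally,
// and record nothing about the user beyond the URL and body.
struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        TtlCache { ttl, entries: HashMap::new() }
    }

    // A hit only counts while the entry is younger than the TTL.
    fn get(&self, url: &str) -> Option<&str> {
        self.entries.get(url).and_then(|(stored, body)| {
            if stored.elapsed() < self.ttl {
                Some(body.as_str())
            } else {
                None
            }
        })
    }

    fn put(&mut self, url: &str, body: String) {
        self.entries.insert(url.to_string(), (Instant::now(), body));
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(60));
    assert!(cache.get("https://example.com").is_none());
    cache.put("https://example.com", "<html>…</html>".to_string());
    assert!(cache.get("https://example.com").is_some());
}
```

Note what the value type is: a timestamp and a body. There is no room in the schema for a behavior profile.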

Search Without Platform Gravity

Search is delegated to SearXNG via JSON (search::search) and only runs when requested.
No background query stream, no passive behavioral model, no "for you" tab.

You can self-host search infrastructure and keep that boundary under your own control.
That is slower than outsourcing trust to ad platforms. It is also the point.
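SearXNG instances do expose JSON results via a format=json query parameter, so an on-demand request reduces to building one URL. The instance address and minimal encoding below are illustrative; how search::search actually forms its request is not shown in this post.

```rust
// Illustrative sketch: a search is one explicit URL, built when the
// user asks and never before. Instance URL is a placeholder.
fn build_search_url(instance: &str, query: &str) -> String {
    // Minimal percent-encoding for the characters a query most needs;
    // a real client would encode the full reserved set.
    let encoded: String = query
        .chars()
        .map(|c| match c {
            ' ' => "+".to_string(),
            '&' => "%26".to_string(),
            '?' => "%3F".to_string(),
            c => c.to_string(),
        })
        .collect();
    format!(
        "{}/search?q={}&format=json",
        instance.trim_end_matches('/'),
        encoded
    )
}

fn main() {
    let url = build_search_url("https://searx.local", "rust tui browser");
    assert_eq!(url, "https://searx.local/search?q=rust+tui+browser&format=json");
}
```

Because the query leaves only when the user submits it, the self-hosted instance is the entire search surface; there is no second channel to audit.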

Tradeoffs

What this project gives up: autoplaying media, script-driven pages, recommendations, and the convenience of a browser that guesses on your behalf.

What it gains: predictable network behavior, less volunteered telemetry, and pages that read like documents.

The web is currently excellent at monetizing confusion, outrage, and shallow novelty.
This project is an attempt to build tools that are better at helping people read.

Not a new ad slot.
A different default.