# ClawFeed: AI-Powered Multi-Source News Aggregator — Stop Scrolling, Start Knowing
You can spend hours scrolling Twitter and RSS feeds, afraid of missing hot topics, only to be drowned in noise and grow more anxious. ClawFeed solves exactly this — it uses AI to automatically filter content from Twitter, RSS, HackerNews, Reddit, GitHub Trending, and more, generating structured summaries. It supports four digest frequencies: 4-hourly briefs, daily highlights, weekly reviews, and monthly summaries. The Source Packs feature lets users bundle and share curated source sets with the community. Interesting items can trigger AI-powered deep analysis (Mark & Deep Dive) that goes beyond summaries into real insight. Every user's digest auto-generates RSS/JSON Feed subscriptions, enabling "human curation + AI processing" information redistribution. The tech stack is clean: a Node.js backend, zero-config SQLite storage, and an SPA frontend with an English/Chinese UI and dark mode. Install with one click via ClawHub, run it as an OpenClaw or Zylos skill, or deploy it standalone. Google OAuth enables multi-user management of sources and bookmarks.
## Background
Many of us spend hours a day scrolling Twitter and RSS feeds, terrified of missing hot topics, only to be drowned in noise — the more you scroll, the more anxious you get. ClawFeed's core philosophy is "Stop scrolling. Start knowing." — using AI to filter signal from noise across thousands of information sources.
## Core Features
### Multi-Frequency Digests
Four automatic summary granularities: 4-hourly briefs, daily highlights, weekly reviews, and monthly summaries. Each frequency is generated independently; subscribe only to the ones you need.
### Rich Source Support

- **Twitter/X**: Follow specific users (@karpathy) or Twitter lists
- **RSS/Atom**: Any RSS feed
- **HackerNews**: HN front page
- **Reddit**: Specific subreddits (e.g., /r/MachineLearning)
- **GitHub Trending**: Filter by programming language
- **Website scraping**: Any web page
- **Custom API**: JSON endpoints
- **Digest Feed**: Subscribe to other ClawFeed users' digests for meta-curation
### Source Packs
Bundle curated source sets into Packs for one-click community installation. Think npm packages for information curation — interested in AI frontiers? Install someone's AI Source Pack.
### Mark & Deep Dive

Bookmark interesting items while browsing. Bookmarked items trigger AI deep analysis that generates detailed reports beyond summaries — real depth, not just overviews.
### Feed Output

Every user's digest auto-generates three subscription formats:

- HTML page (`/feed/:slug`)
- JSON Feed (`/feed/:slug.json`)
- RSS (`/feed/:slug.rss`)

Your AI-curated results become subscribable, enabling chain propagation of information.
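As a minimal sketch of consuming a digest feed, the helpers below build those three URLs and read item titles from a parsed JSON Feed. The `/feed/:slug` path shapes come from the docs above; the host name in the usage comment is an assumption for illustration.

```javascript
// Build the three subscription URLs ClawFeed exposes for one digest slug.
function feedUrls(baseUrl, slug) {
  const root = `${baseUrl.replace(/\/$/, "")}/feed/${encodeURIComponent(slug)}`;
  return { html: root, json: `${root}.json`, rss: `${root}.rss` };
}

// Pull item titles out of an already-parsed JSON Feed document
// (per the JSON Feed spec, entries live in the top-level `items` array).
function digestTitles(feed, limit = 5) {
  return (feed.items ?? []).slice(0, limit).map((item) => item.title);
}

// Usage, e.g. with Node 18+'s built-in fetch:
//   const { json } = feedUrls("https://clawfeed.example.com", "ai-daily");
//   const feed = await (await fetch(json)).json();
//   console.log(digestTitles(feed));
```

Because the output is standard JSON Feed/RSS, any existing feed reader can subscribe to it directly.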
### Smart Curation

Configure filtering via `templates/curation-rules.md` and customize the AI output format via `templates/digest-prompt.md`.
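The actual schema of the rules file is not documented here, so the following is only a hypothetical illustration of the kind of plain-markdown instructions `templates/curation-rules.md` might hold:

```markdown
# Curation Rules (hypothetical example)

- Prioritize: LLM research, open-source releases, infrastructure cost analyses
- Skip: launch threads with no technical detail, engagement bait
- Merge near-duplicate stories into a single item
- Tone: neutral and technical, at most three sentences per item
```

Keeping the rules in a markdown file means they can be versioned and shared alongside a Source Pack.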
## Technical Architecture

- **Backend**: Node.js, default port 8767
- **Database**: SQLite (zero-config, portable)
- **Auth**: Google OAuth 2.0 (multi-user; read-only without OAuth)
- **Frontend**: SPA with English/Chinese UI, Dark/Light theme toggle
- **API**: Full RESTful API — digests, auth, bookmarks, sources, Source Packs, Feed output, changelog, roadmap
## Deployment

ClawHub one-click install (`clawhub install clawfeed`), OpenClaw/Zylos skill, or standalone (`git clone → npm install → npm start`). Supports a Caddy reverse proxy with a path prefix.
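One way to front the default port behind a path prefix is a minimal Caddyfile like the sketch below; the `news.example.com` host and `/clawfeed` prefix are assumptions, and only the default port 8767 comes from the docs:

```caddyfile
news.example.com {
    # handle_path strips the /clawfeed prefix before proxying,
    # so the app sees root-relative paths.
    handle_path /clawfeed/* {
        reverse_proxy 127.0.0.1:8767
    }
}
```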
## Use Cases

- Personal information aggregation replacing anxious Twitter/HN/Reddit scrolling
- Team knowledge sharing via Source Packs and Feed output
- AI Agent integration as an OpenClaw/Zylos skill
- Information redistribution through curated Feed subscriptions

**License**: MIT | **Author**: Kevin He | **Live Demo**: https://clawfeed.kevinhe.io