
moltbook-digest

Collect Moltbook posts and comments, build an evidence pack, and interpret it through either the calling agent or LiteLLM.

Author: admin | Source: ClawHub
Version: v0.1.2
Security check: passed
Downloads: 148
Favorites: 0

moltbook-digest

# Moltbook Digest

Use this skill when the user wants more than a scrape. The goal is to turn Moltbook discussions into a usable evidence pack and then into a clear report.

## When To Use It

- query-driven research on a specific Moltbook topic
- feed digest for `hot`, `new`, `top`, or `rising`
- repeated monitoring for one submolt or topic
- agent-written report from a collected evidence pack

## Core Rule

Prefer Moltbook's public API over browser scraping:

1. collect candidate posts
2. expand the strongest posts and comments
3. interpret the evidence

Do not claim exhaustive coverage unless the actual sample supports it.

## Setup

Install dependencies:

```bash
uv sync --project "{baseDir}"
```

Before any interpreted run, create a user-specific config:

```bash
cp "{baseDir}/config.example.yaml" "{baseDir}/config.yaml"
```

Then customize `config.yaml`:

- replace `analysis.default_language: "__USER_PREFERRED_LANGUAGE__"`
- keep or change `active_provider`
- adjust `analysis.question_template`, `analysis.contract_template`, and `analysis.report_structure` if needed
- fill provider keys only when using an external provider

Do not run interpretation directly from `config.example.yaml`. Do not ask the agent to write or reveal API keys.
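As a rough sketch, a customized `config.yaml` might look like the following. Only the keys named above are confirmed by this document; the example values and the overall layout are assumptions, not the skill's actual schema:

```yaml
# Hypothetical sketch of a customized config.yaml.
# Only the key names listed above are confirmed; values are illustrative.
active_provider: agent          # keep, or name an external provider

analysis:
  default_language: "en"        # replaces "__USER_PREFERRED_LANGUAGE__"
  question_template: "What are the main positions and tradeoffs on {topic}?"
  contract_template: "Cite evidence.json post IDs for every claim."
  report_structure: "summary, key threads, open questions"

# Fill provider keys only when using an external provider,
# and never ask the agent to write or reveal them.
```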
## Short Commands

Collection only:

```bash
uv run --project "{baseDir}" python "{baseDir}/scripts/moltbook_digest.py" \
  --query "agent memory architecture" \
  --query "agent memory failures and tradeoffs" \
  --analysis-mode none
```

Feed digest:

```bash
uv run --project "{baseDir}" python "{baseDir}/scripts/moltbook_digest.py" \
  --collection-mode feed \
  --feed-sort hot \
  --max-posts 5 \
  --comment-limit 6 \
  --analysis-mode none
```

Continuous tracking:

```bash
uv run --project "{baseDir}" python "{baseDir}/scripts/moltbook_digest.py" \
  --collection-mode feed \
  --feed-sort rising \
  --submolt agents \
  --history-dir output/moltbook-digest/history \
  --analysis-mode none
```

## Interpretation Paths

### Agent

Use this when the current agent should write the final report.

```bash
uv run --project "{baseDir}" python "{baseDir}/scripts/moltbook_digest.py" \
  --query "agent memory governance" \
  --analysis-mode auto \
  --llm-config "{baseDir}/config.yaml"
```

What the script writes:

- `digest.md`
- `evidence.json`
- `analysis_input.md`
- `agent_handoff.md`

What the calling agent must do next:

1. read `agent_handoff.md` first
2. read `analysis_input.md`
3. write the final report to `analysis_report.md`

Do not draft the report from `digest.md` alone.

### LiteLLM

Use this when the script should call an external provider.

```bash
uv run --project "{baseDir}" python "{baseDir}/scripts/moltbook_digest.py" \
  --query "long-running agent memory patterns" \
  --analysis-mode auto \
  --llm-config "{baseDir}/config.yaml"
```

What the script writes:

- `digest.md`
- `evidence.json`
- `analysis_input.md`
- `analysis_report.md`

## Output Contract

This test build uses fixed filenames:

- `digest.md`
- `evidence.json`
- `analysis_input.md`
- `agent_handoff.md`
- `analysis_report.md`

In `agent` mode, `analysis_report.md` is not auto-generated by the script. It is the expected output path for the calling agent.
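The fixed-filename contract above can be checked mechanically before handing off to interpretation. A minimal sketch, assuming outputs land in a single flat directory (the script's real output layout is not specified here):

```python
from pathlib import Path

# Filenames fixed by the output contract above.
EXPECTED = ["digest.md", "evidence.json", "analysis_input.md"]
AGENT_ONLY = ["agent_handoff.md"]   # written only in agent mode
REPORT = "analysis_report.md"       # agent-authored in agent mode


def missing_outputs(out_dir, mode="agent"):
    """Return contract files the script should have written but did not.

    In agent mode, analysis_report.md is intentionally absent until the
    calling agent writes it, so it is not counted as missing there.
    """
    out = Path(out_dir)
    expected = list(EXPECTED)
    if mode == "agent":
        expected += AGENT_ONLY
    else:  # litellm mode: the script writes the report itself
        expected.append(REPORT)
    return [name for name in expected if not (out / name).exists()]
```

A caller can run this right after collection and refuse to proceed to report writing if the list is non-empty.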
## Guidance For The Calling Agent

If the user is vague:

- ask what question the report should answer
- ask whether the goal is breadth, depth, or recency
- ask whether a specific submolt matters

If the user does not answer:

- make one reasonable assumption
- state it clearly in the report

## Notes

- `references/api.md` contains endpoint notes and query guidance
- search and expansion are fault-tolerant
- non-fatal issues are recorded in `evidence.json`
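Since non-fatal issues are recorded in `evidence.json`, a caller can surface them before interpreting the pack. A sketch under a stated assumption: the snippet below assumes a hypothetical top-level `issues` list, since the real evidence-pack schema is not documented here:

```python
import json
from pathlib import Path


def collect_issues(evidence_path):
    """List non-fatal collection issues recorded in evidence.json.

    Assumes a top-level "issues" array; this skill's actual evidence
    pack may record them under a different key.
    """
    data = json.loads(Path(evidence_path).read_text(encoding="utf-8"))
    return data.get("issues", [])
```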

Tags

skill ai

Install via Chat

This skill can be installed via chat on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Option 1: Install SkillHub and the skill

Help me install SkillHub and the moltbook-digest-1776127682 skill

Option 2: Set SkillHub as the preferred skill source

Set SkillHub as my preferred skill installation source, then help me install the moltbook-digest-1776127682 skill

Install via Command Line

skillhub install moltbook-digest-1776127682

Download Zip Package

⬇ Download moltbook-digest v0.1.2

File size: 33.14 KB | Published: 2026-04-17 15:26

v0.1.2 (latest) 2026-04-17 15:26
moltbook-digest 0.1.2

- Refactored documentation for clarity and conciseness, emphasizing evidence pack collection and reporting through agent or LiteLLM paths.
- Removed the sample OpenAI agent configuration file (agents/openai.yaml).
- Updated setup and config guidance to clarify provider use and key management.
- Described output contract with fixed filenames and clearer handoff expectations for agent and LiteLLM modes.
- Added guidance for handling vague user requests and documenting assumptions in the report.

