
code-cache

Semantic code caching for AI agents. Cache, retrieve, and reuse code from prior agent executions.

Author: admin | Source: ClawHub
Version: V 1.0.0
Security check: Passed
Downloads: 677
Favorites: 0


# Code Cache - Semantic Code Caching for AI Agents

This skill enables semantic code caching via the [Raysurfer](https://raysurfer.com) API.

## What It Does

When your agent generates and executes code, Code Cache stores it. When a similar task comes up later, the agent can retrieve and run the cached code instead of regenerating it—saving time and tokens.

## Setup

Get your API key from the [Raysurfer dashboard](https://raysurfer.com/dashboard/api-keys) and configure it:

```bash
# Via environment variable
export RAYSURFER_API_KEY=your_api_key_here

# Or via OpenClaw config
openclaw config set skills.entries.code-cache.apiKey "your_api_key_here"
```

## Available Commands

### Search for cached code

```
/code-cache search <task description> [--top-k N] [--min-score FLOAT] [--show-code]
```

Search for cached code snippets that match a natural language task description.

**Options:**

- `--top-k N` — Maximum number of results (default: 5)
- `--min-score FLOAT` — Minimum verdict score filter (default: 0.3)
- `--show-code` — Display the source code of the top match

**Example:**

```
/code-cache search "Generate a quarterly revenue report"
/code-cache search "Fetch GitHub trending repos" --top-k 3 --show-code
```

### Get code files for a task

```
/code-cache files <task description> [--top-k N] [--cache-dir DIR]
```

Retrieve code files ready for execution, with a pre-formatted prompt addition for your LLM.

**Options:**

- `--top-k N` — Maximum number of files (default: 5)
- `--cache-dir DIR` — Output directory (default: `.code_cache`)

**Example:**

```
/code-cache files "Fetch GitHub trending repos"
/code-cache files "Build a chart" --cache-dir ./cached_code
```

### Upload code to cache

```
/code-cache upload <task> --files <path> [<path>...] [--failed] [--no-auto-vote]
```

Upload code from an execution to the cache for future reuse.
**Options:**

- `--files, -f` — Files to upload (required, can specify multiple)
- `--failed` — Mark the execution as failed (default: succeeded)
- `--no-auto-vote` — Disable automatic voting on stored code blocks

**Example:**

```
/code-cache upload "Build a chart" --files chart.py
/code-cache upload "Data pipeline" -f extract.py transform.py load.py
/code-cache upload "Failed attempt" --files broken.py --failed
```

### Vote on cached code

```
/code-cache vote <code_block_id> [--up|--down] [--task TEXT] [--name TEXT] [--description TEXT]
```

Vote on whether cached code was useful. This improves retrieval quality over time.

**Options:**

- `--up` — Upvote / thumbs up (default)
- `--down` — Downvote / thumbs down
- `--task` — Original task description (optional)
- `--name` — Code block name (optional)
- `--description` — Code block description (optional)

**Example:**

```
/code-cache vote abc123 --up
/code-cache vote xyz789 --down --task "Generate report"
```

## How It Works

1. **Cache Hit**: When you ask for code similar to something previously executed, Code Cache returns the cached version instantly
2. **Cache Miss**: When no match exists, your agent generates code normally, then Code Cache stores it for future use
3. **Verdict Scoring**: Code that works gets 👍, code that fails gets 👎—retrieval improves over time

## API Reference

The skill wraps these Raysurfer API methods:

| Method | Description |
|--------|-------------|
| `search(task, top_k, min_verdict_score)` | Unified search for cached code snippets |
| `get_code_files(task, top_k, cache_dir)` | Get code files ready for sandbox execution |
| `upload_new_code_snips(task, files_written, succeeded, auto_vote)` | Store new code after execution |
| `vote_code_snip(task, code_block_id, code_block_name, code_block_description, succeeded)` | Vote on snippet usefulness |

## Why Code Caching?

LLM agents repeat the same patterns constantly.
Instead of regenerating code every time:

- **30x faster**: Retrieve proven code instead of waiting for generation
- **Lower costs**: Reduce token usage by reusing cached solutions
- **Higher quality**: Cached code has been validated and voted on
- **Consistent output**: Same task = same proven solution

Learn more at [raysurfer.com](https://raysurfer.com) or read the [documentation](https://docs.raysurfer.com).
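The hit/miss/vote flow described above can be sketched with a minimal in-memory cache. This is an illustrative toy, not the Raysurfer client: it uses `difflib` string similarity as a stand-in for semantic embeddings, and the `ToyCodeCache` class and its method names are hypothetical, loosely mirroring the API methods listed in the table.

```python
from difflib import SequenceMatcher

class ToyCodeCache:
    """Illustrative stand-in for semantic code caching (not the Raysurfer API)."""

    def __init__(self, min_score=0.3):
        self.min_score = min_score
        self.entries = []  # each entry: {"task": str, "code": str, "votes": int}

    def search(self, task, top_k=5):
        """Rank cached entries by task similarity, keeping scores >= min_score."""
        scored = [
            (SequenceMatcher(None, task.lower(), e["task"].lower()).ratio(), e)
            for e in self.entries
        ]
        scored = [(score, e) for score, e in scored if score >= self.min_score]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return scored[:top_k]

    def upload(self, task, code, succeeded=True):
        """Store code after an execution (the cache-miss path)."""
        self.entries.append(
            {"task": task, "code": code, "votes": 1 if succeeded else -1}
        )

    def vote(self, index, up=True):
        """Thumbs up/down a stored entry to steer future retrieval."""
        self.entries[index]["votes"] += 1 if up else -1

cache = ToyCodeCache()

# Cache miss: nothing stored yet, so the agent would generate code, then upload it.
assert cache.search("Fetch GitHub trending repos") == []
cache.upload("Fetch GitHub trending repos", "print('fetching repos')")

# Cache hit: a similarly worded task now retrieves the stored code.
hits = cache.search("Fetch the GitHub trending repositories")
```

The real service replaces the string-similarity ranking with semantic search and uses accumulated votes (the "verdict score") to filter results, but the control flow an agent follows is the same: search first, generate and upload only on a miss, then vote on what it reused.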

Tags

skill ai

Install via Conversation

This skill can be installed via conversation on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Method 1: Install SkillHub and the skill

Help me install SkillHub and the code-cache-1776419979 skill

Method 2: Set SkillHub as the preferred skill source

Set SkillHub as my preferred skill installation source, then help me install the code-cache-1776419979 skill

Install via Command Line

skillhub install code-cache-1776419979

Download Zip Package

⬇ Download code-cache v1.0.0

File size: 10.67 KB | Published: 2026-4-17 20:18

v1.0.0 (latest) 2026-4-17 20:18
Initial release: semantic code caching for AI agents via Raysurfer API
