langsmith-cli

Author: admin | Source: ClawHub
Version: v1.1.0 | Security check: passed | Downloads: 194 | Favorites: 1


# LangSmith CLI Skill

CLI: `scripts/langsmith.py`. Requires `LANGSMITH_API_KEY` in env (or `~/.zshrc`). No second API key needed — the `ask` command fetches and formats traces as structured context for **your agent** to analyze. No trace data is sent to any third-party LLM.

## Commands

### Tier 0 — Ask (agent Q&A over traces)

```bash
python3 scripts/langsmith.py ask "<question>" --project <name> [--since 24h] [--limit 50]
```

Fetches recent runs and prints them as structured JSON context. Your agent reads the output and answers the question — no external LLM calls, no data leaving your machine beyond the LangSmith API.

Examples:

- `ask "why is my chain slow this week" --project my-project`
- `ask "what do failing runs have in common" --project my-project --since 7d`
- `ask "did the system prompt change on Friday affect output quality" --project my-project`

### Tier 1 — Situational Awareness

```bash
python3 scripts/langsmith.py runs <project> [--since 2h] [--status error|success] [--limit 20]
python3 scripts/langsmith.py cost <project> [--since 7d]     # token spend by chain/node
python3 scripts/langsmith.py latency <project> [--since 24h] # p50/p95/p99 per run name
```

### Tier 2 — Before/After Comparisons

```bash
python3 scripts/langsmith.py diff <project> --before <ISO_date> --after <ISO_date>
python3 scripts/langsmith.py prompt-diff <run_id_a> <run_id_b>
```

`diff` compares avg latency, error rate, cost, and output length across two time windows. `prompt-diff` shows side-by-side system prompts and outputs for two specific runs.
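The `latency` command reports p50/p95/p99 per run name. A minimal sketch of that percentile math using only the standard library follows; the record shape (`name`, `latency_s`) and sample values are illustrative assumptions, not the skill's actual run schema:

```python
from statistics import quantiles

# Hypothetical run records shaped like the fields such a command might read;
# real runs come from the LangSmith API.
runs = [
    {"name": "retrieve", "latency_s": t}
    for t in [0.2, 0.3, 0.25, 0.9, 0.31, 0.28, 0.27, 0.26, 0.33, 1.4]
]

def latency_percentiles(runs, name):
    """p50/p95/p99 over the latencies of runs with a given name."""
    lat = sorted(r["latency_s"] for r in runs if r["name"] == name)
    # quantiles(n=100) yields 99 cut points: index 49 -> p50, 94 -> p95, 98 -> p99
    q = quantiles(lat, n=100, method="inclusive")
    return {"p50": q[49], "p95": q[94], "p99": q[98]}

print(latency_percentiles(runs, "retrieve"))
```

With the inclusive method, p50 of the ten sample latencies interpolates between the two middle values (0.28 and 0.3), so the tail percentiles are driven almost entirely by the two slow outlier runs, which is exactly why a p95/p99 view catches regressions that an average hides.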
### Tier 3 — Deep Analysis (stubs, expand as needed)

```bash
python3 scripts/langsmith.py cluster-failures <project> [--since 7d]
python3 scripts/langsmith.py replay <run_id>
```

## Auth Setup

```bash
export LANGSMITH_API_KEY=<your-key>  # or add to ~/.zshrc
```

Test with: `python3 scripts/langsmith.py runs <project> --limit 3`

## Security & Data Flow

This skill makes outbound network requests only to **`api.smith.langchain.com`** (the LangSmith API). That's it.

- **`LANGSMITH_API_KEY`** — sent as an HTTP header to `api.smith.langchain.com` only. Never logged or stored.
- **Trace data** — fetched from LangSmith and printed to stdout for your agent to read. No trace data is sent to any third-party LLM or external service.
- **No second API key required** — the `ask` command outputs structured trace context for your existing agent to analyze, rather than making its own LLM calls.
- **No telemetry** — the script collects no usage data.

The script is ~300 lines of pure Python with no obfuscation. Audit it at `scripts/langsmith.py`.

## API Reference

See `references/langsmith-api.md` for endpoint details and run object schema.
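As a sanity check on the single-host claim above, here is a hedged sketch of how a runs query against that host could be constructed. The `/runs/query` path, the `x-api-key` header, and the JSON body fields are assumptions drawn from the public LangSmith API, not a transcript of what `scripts/langsmith.py` does internally; the request is only built here, never sent:

```python
import json
import os
import urllib.request

API = "https://api.smith.langchain.com"  # the only host the skill should contact

def build_runs_request(project: str, limit: int = 3) -> urllib.request.Request:
    # Build (but do not send) a runs query. Endpoint, auth header, and
    # body fields are assumed; see references/langsmith-api.md for the
    # schema the skill actually uses.
    body = json.dumps({"session": [project], "limit": limit}).encode()
    return urllib.request.Request(
        f"{API}/runs/query",
        data=body,
        headers={
            "x-api-key": os.environ.get("LANGSMITH_API_KEY", "<your-key>"),
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_runs_request("my-project")
print(req.full_url)
```

Keeping the key in an environment variable and the host in a single constant makes the data-flow claim easy to audit: grep the script for URLs and confirm they all resolve to `api.smith.langchain.com`.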

Tags

skill ai

Install via conversation

This skill can be installed via conversation on the following platforms:

OpenClaw WorkBuddy QClaw Kimi Claude

Option 1: install SkillHub and the skill

Help me install SkillHub and the langsmith-cli-1776367922 skill

Option 2: set SkillHub as the preferred skill source

Set SkillHub as my preferred skill installation source, then help me install the langsmith-cli-1776367922 skill

Install via command line

skillhub install langsmith-cli-1776367922

Download Zip package

⬇ Download langsmith-cli v1.1.0

File size: 7.8 KB | Published: 2026-4-17 15:12

v1.1.0 (latest) 2026-4-17 15:12
Removed internal LLM calls from ask command. Traces are now returned as structured context for the agent to analyze — no third-party LLM calls, no ANTHROPIC_API_KEY needed. Only outbound connection is to api.smith.langchain.com.
