
stella-selfie

Generate persona-consistent selfie images and send to any OpenClaw channel. Supports Gemini, fal, and laozhang.ai providers, multi-reference avatar blending.

Author: admin | Source: ClawHub
Version: v1.3.3
Security check: passed
Downloads: 289
Favorites: 0

# Stella Selfie

Generate persona-consistent selfie images using Google Gemini, fal (xAI Grok Imagine), or laozhang.ai, and send them to messaging channels via OpenClaw. Supports multi-reference avatar blending for strong character consistency.

## When to Use

- User says "send a pic", "send me a photo", "send a selfie", "发张照片", "发自拍"
- User says "show me what you look like...", "send a pic of you...", "展示你在..."
- User describes a scene: "send a pic wearing...", "send a pic at...", "穿着...发张图"
- User wants the agent to appear in a specific outfit, location, or situation

## Prompt Modes

### Mode 1: Mirror Selfie (default)

Best for: outfit showcases, full-body shots, fashion content

```
A mirror selfie of this person, [user's context], showing full body reflection.
```

### Mode 2: Direct Selfie

Best for: close-up portraits, location shots, emotional expressions

```
A selfie of this person, [user's context], looking into the lens.
```

### Mode 3: Third-Person Photo

Best for: non-selfie viewpoints, including explicit third-person requests and scenes that should not read as a selfie

```
A natural third-person photo of this person, [user's context], natural composition, not a selfie.
```

### Mode Selection Logic

| Signal | Auto-Select Mode |
| ------ | ---------------- |
| Strong user keywords: outfit, wearing, clothes, dress, suit, fashion | `mirror` |
| Strong user keywords: full-body, mirror, reflection, pose, show the look | `mirror` |
| Strong user keywords: selfie, close-up, portrait, face, eyes, smile, looking into the lens | `direct` |
| Strong user keywords: third-person, not a selfie, candid shot, 他拍, 路拍, 抓拍 | `third_person` |
| Legacy keywords: travel photo, tourist photo, 旅拍, 打卡照, 风景合影 | `third_person` |

Default policy:

- Interpret explicit user requirements first: camera style, outfit emphasis, body framing, scene, pose, and expression.
- Use `mirror` by default for outfit / full-body / self-presentation requests, even if the user did not explicitly mention a mirror.
- Use `direct` by default for selfie requests focused on face, emotion, immediacy, or in-the-moment presence.
- Use `third_person` only when the user explicitly asks for a non-selfie style or clearly describes a shot that should not read as a selfie.

Default mode when no keywords match and timeline is unavailable: `mirror`

## Resolution Keywords

| User says | Resolution |
| --------- | ---------- |
| (default) | `1K` |
| 2k, 2048, medium res, 中等分辨率 | `2K` |
| 4k, high res, ultra, 超清, 高分辨率 | `4K` |

## Step-by-Step Instructions

### Step 1: Collect User Input

Determine from the user's message:

- **Explicit context** (optional): scene, outfit, location, activity — detect from keywords
- **Mode** (optional): `mirror`, `direct`, or `third_person` — auto-detect from explicit user intent if not specified
- **Target channel**: where to send (e.g., `#general`, `@username`, channel ID)
- **Channel provider** (optional): which platform (discord, telegram, whatsapp, slack)
- **Resolution** (optional): 1K / 2K / 4K — default 1K
- **Count** (optional): how many images — default 1, only increase if explicitly requested
- **Has explicit scene?**: does the request contain any specific scene/outfit/location/activity keywords?

### Step 2: Enrich with Timeline Context or Recent Scene Recall

`timeline_resolve` is an optional enhancement, not a prerequisite.

- If `timeline_resolve` is unavailable in the current environment, skip this step and proceed with Stella's default behavior.
- If the request is a current-state `Sparse` prompt — for example "发张自拍", "发张照片", "想看看你", "send a selfie", "send a photo", "show me what you look like" — and `timeline_resolve` is available, load and follow `references/timeline-integration.md`.
- If the current request clearly refers back to a single recently resolved timeline scene in the current conversation, load and follow `references/timeline-integration.md` even if the photo request itself is not Sparse.
- If the user already provided a clear standalone scene, outfit, location, activity, or camera requirement, and it is not a callback to a recently resolved timeline scene, do not use timeline enhancement. Follow the default policy directly.
- When you do call `timeline_resolve`, do not freely rewrite the request into output-slot questions. Use the fixed query rules in `references/timeline-integration.md`.
- Only enable Nano Banana real-world grounding when the prompt can explicitly include a concrete `city` plus an exact local date/time anchor from timeline data. If those anchors are missing, do not claim real-world synchronization.
- If timeline returns `fact.status === "empty"`, is missing `result.consumption`, or any error occurs, immediately fall back to Step 3 without mentioning timeline failure to the user.

**Never block image generation on timeline availability.** Timeline enrichment is best-effort and should only be used for current-state Sparse prompts or explicit callbacks to a recently resolved timeline scene.

### Step 3: Assemble Prompt

Select the mode from the default policy first. If the request is Sparse, and you loaded `references/timeline-integration.md` and obtained usable timeline context, apply its Sparse-only merge and prompt rules. When that timeline enrichment includes outdoor real-world grounding, keep the grounding clause as a separate strong instruction sentence rather than a soft atmosphere phrase like `Make it feel like...`.

Otherwise, use the user's explicit context directly and keep Stella's original fallback behavior:

```
[mirror] A mirror selfie of this person, [user's explicit context if any], showing full body reflection.
[direct] A selfie of this person, [user's explicit context if any], looking into the lens.
[third_person] A natural third-person photo of this person, [user's explicit context if any], natural composition, not a selfie.
```

### Step 4: Generate Image

Run the Stella script:

```bash
node {baseDir}/dist/scripts/skill.js \
  --prompt "<ASSEMBLED_PROMPT>" \
  --target "<TARGET_CHANNEL>" \
  --channel "<CHANNEL_PROVIDER>" \
  --caption "<CAPTION_TEXT>" \
  --resolution "<1K|2K|4K>" \
  --count <NUMBER>
```

### Step 5: Confirm Result

After the script completes, confirm to the user:

- Image was generated successfully
- Image was sent to the target channel
- If any error occurred, send a concise, actionable failure message

## Environment Variables

Stella supports multiple providers and a gateway-backed send path, so its sensitive runtime environment variables are explicitly declared in `metadata.openclaw.requires.env` for OpenClaw's env-injection allowlist. The skill also sets `metadata.openclaw.always: true`, so these declarations do not become hard load-time gates. Actual credential validation remains runtime-driven inside `skill.js`, based on the selected provider.

| Variable | Required | Description |
| -------- | -------- | ----------- |
| `GEMINI_API_KEY` | Required (if `Provider=gemini`) | Google Gemini API key |
| `FAL_KEY` | Required (if `Provider=fal`) | fal.ai API key |
| `LAOZHANG_API_KEY` | Required (if `Provider=laozhang`) | laozhang.ai API key (`sk-xxx`); get it at [api.laozhang.ai](https://api.laozhang.ai) |
| `Provider` | Optional | Image provider: `gemini`, `fal`, or `laozhang` |
| `AvatarBlendEnabled` | Optional | Enable or disable multi-reference avatar blending |
| `AvatarMaxRefs` | Optional | Maximum number of reference images to blend |

Credential requirements are provider-specific:

- Default `Provider=gemini`: requires `GEMINI_API_KEY`
- `Provider=fal`: requires `FAL_KEY`
- `Provider=laozhang`: requires `LAOZHANG_API_KEY`

## Media File Handling (Gemini)

When `Provider=gemini`, Stella writes generated files to:

- `~/.openclaw/workspace/stella-selfie/`

After a successful send, Stella deletes the local file immediately. If the send fails, the file is kept for debugging.

## Skill Environment Options

Configure in your OpenClaw `openclaw.json` under `skills.entries.stella-selfie.env`:

| Option | Default | Description |
| ------ | ------- | ----------- |
| `Provider` | `gemini` | Image provider: `gemini`, `fal`, or `laozhang` |
| `AvatarBlendEnabled` | `true` | Enable multi-reference avatar blending |
| `AvatarMaxRefs` | `3` | Maximum number of reference images to blend |

> **Note for `Provider=fal` users**: fal's image editing API only accepts HTTP/HTTPS image URLs. Local file paths (from `Avatar` / `AvatarsDir`) are not supported. Configure `AvatarsURLs` in `IDENTITY.md` with public URLs of your reference images to enable image editing with fal.

> **Note for `Provider=laozhang` users**: laozhang.ai uses the Google-native Gemini API format (`gemini-3-pro-image-preview`). It requires local reference images from `Avatar` / `AvatarsDir` and does not use `AvatarsURLs`. Supports 1K/2K/4K resolution and 10 aspect ratios. Get your API key at [api.laozhang.ai](https://api.laozhang.ai) — remember to configure a billing mode in the token settings before use.

## Delivery Path

- Stella sends via `openclaw message send`.
- Delivery auth and routing are handled by the local OpenClaw installation, not by skill-level gateway tokens.
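As a concrete illustration of the Skill Environment Options above, a minimal `openclaw.json` fragment might look like the following. The `skills.entries.stella-selfie.env` path and the option names come from this document; the surrounding file structure is an assumption, and your actual `openclaw.json` layout may differ. The values shown are the documented defaults.

```json
{
  "skills": {
    "entries": {
      "stella-selfie": {
        "env": {
          "Provider": "gemini",
          "AvatarBlendEnabled": "true",
          "AvatarMaxRefs": "3"
        }
      }
    }
  }
}
```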
## External Endpoints and Data Flow

| Endpoint / path | When used | Data sent |
| --------------- | --------- | --------- |
| Google Gemini API | `Provider=gemini` | Prompt text and selected local reference images from `Avatar` / `AvatarsDir` |
| fal API | `Provider=fal` | Prompt text and public reference image URLs from `AvatarsURLs` |
| laozhang.ai API (`api.laozhang.ai`) | `Provider=laozhang` | Prompt text and local reference images (`Avatar` / `AvatarsDir`, uploaded as base64) |
| Local OpenClaw CLI | Always for delivery | Target channel, target id, caption text, and generated media path/URL |

## Security and Privacy

- Stella reads `~/.openclaw/workspace/IDENTITY.md` and local avatar files to build reference context.
- Under `Provider=gemini`, selected local avatar images are uploaded to Gemini as part of normal image generation.
- Under `Provider=fal`, only public `http/https` avatar URLs are sent; local avatar files are not uploaded to fal directly.
- Under `Provider=laozhang`, local avatar files from `Avatar` / `AvatarsDir` are base64-encoded and uploaded to laozhang.ai.
- Generated files (Gemini and laozhang) are written to `~/.openclaw/workspace/stella-selfie/` and deleted after a successful send.

## User Configuration

Before using this skill, you must configure your OpenClaw workspace. See `templates/SOUL.fragment.md` for the recommended capability snippet to add to your `SOUL.md`.
### Required: IDENTITY.md

Add the following fields to `~/.openclaw/workspace/IDENTITY.md`:

```markdown
Avatar: ./assets/avatar-main.png
AvatarsDir: ./avatars
AvatarsURLs: https://cdn.example.com/ref1.jpg, https://cdn.example.com/ref2.jpg
```

- `Avatar`: path to your primary reference image (relative to the workspace root)
- `AvatarsDir`: directory containing multiple reference photos of the same character (different styles, scenes, outfits)
- `AvatarsURLs`: comma-separated public URLs of reference images — required for `Provider=fal` (local files are not supported by fal's API)

### Required: avatars/ Directory

Place your reference photos in `~/.openclaw/workspace/avatars/`:

- Use `jpg`, `jpeg`, `png`, or `webp` format
- All photos should be of the same character
- Different styles, scenes, outfits, and expressions work best
- Images are selected by creation time (newest first)

### Required: SOUL.md

Add the Stella capability block to `~/.openclaw/workspace/SOUL.md`. See README.md ("4. SOUL.md") for the copy/paste snippet.

## Installation

```bash
clawhub install stella-selfie
```

After installation, complete the configuration steps above before using the skill.

Tags

skill, ai

Install via Chat

This skill can be installed via chat on the following platforms:

OpenClaw, WorkBuddy, QClaw, Kimi, Claude

Option 1: Install SkillHub and the skill

Help me install SkillHub and the stella-selfie-1776209584 skill

Option 2: Set SkillHub as the preferred skill source

Set SkillHub as my preferred skill installation source, then help me install the stella-selfie-1776209584 skill

Install via Command Line

skillhub install stella-selfie-1776209584

Download Zip Package

⬇ Download stella-selfie v1.3.3

File size: 27.23 KB | Released: 2026-4-17 16:14

v1.3.3 (latest) 2026-4-17 16:14
- feat: Implement target directory write checks in sync-local-openclaw script to ensure proper permissions before syncing
- chore: Bump version to 1.3.2 in package.json and update documentation to emphasize stronger grounding syntax for outdoor scenes
- chore: Bump version to 1.3.1 in package.json
- chore: Add sync-local-openclaw script to package.json and .clawhubignore, and improve README_CN formatting for third-person photo section
- docs: Update README and README_CN to replace "tourist photo" with "third-person photo" for consistency, and refine context completion details for sparse requests with stella-timeline-plugin integration
- docs: Update README and README_CN to replace "travel photo" with "third-person photo" for consistency, and enhance clarity on context completion for sparse photo requests with stella-timeline-plugin integration
- docs: Update README and README_CN to improve formatting of selfie modes and enhance clarity on reference image examples
- docs: Update README and README_CN to clarify selfie modes and enhance character consistency with new reference image examples
- chore: Bump version to 1.3.0 in package.json and update documentation links in README and README_CN to reflect new references structure
- docs: Clarify user intent requirements for selfie requests in README and README_CN, enhancing context completion details with stella-timeline-plugin integration
- docs: Enhance README and README_CN to clarify integration with stella-timeline-plugin, detailing context completion and scene continuity features for improved user experience
- docs: Update README and SKILL.md to clarify optional timeline enrichment rules and link to detailed integration documentation
- docs: Update README_CN.md to correct formatting of avatar configuration fields for improved readability and consistency
- docs: Update README_CN.md to improve clarity on API key references for providers, enhancing user understanding of integration options
- chore: Bump version to 1.2.5 in package.json and update README.md for improved clarity on provider configurations and reference image setup
- docs: Enhance README_CN.md with detailed guidance on reference image setup and integration with stella-timeline-plugin for improved character consistency and context-aware selfies
- chore: Refine SKILL.md to clarify usage of atmosphere hints and continuity in mode selection, enhancing prompt generation logic
- chore: Update README_CN.md for clarity on avatar blending and provider configurations, remove protocol.md, and enhance error handling in skill.ts with new test cases
- chore: Bump version to 1.2.4 in package.json and update documentation for laozhang.ai provider to clarify usage of local reference images
- chore: Update .env.example to remove OPENCLAW_GATEWAY_TOKEN, reflecting recent changes in API integration and documentation
