# Gemini Deep Research → Notion
## Execution Mode
**Run ALL steps in the MAIN SESSION. Do NOT spawn a subagent.**
The browser tool (OpenClaw-managed profile) is only available in the main session.
Subagents cannot access the browser, so all browser automation must happen here.
Reply first: "🔬 Deep Research starting for: [topic]. This takes ~25 min. I'll update you when done."
Then execute all phases below sequentially.
---
## Instructions
Complete ALL steps below in the main session.
### Phase 1: Trigger Deep Research
1. `browser action=open profile=openclaw targetUrl="https://gemini.google.com/app"`
2. Snapshot, find the text input, and type the research query. **Always prepend "请用中文回答。" ("Please answer in Chinese.") to the query** so the research output is in Chinese.
3. Click **"工具" (Tools)** button (has `page_info` icon) → click **"Deep Research"** in the menu
4. Click **Send** to submit the query
5. Wait for research plan to appear (~10s), then click **"Start research"** / **"开始研究"** button
- If snapshot-click doesn't work, use JS: `(() => { var btn = Array.from(document.querySelectorAll('button')).find(b => /Start research|开始研究/.test(b.textContent.trim())); if (btn) { btn.click(); return 'clicked'; } return 'not found'; })()`
6. Verify research started: the Start button should now be disabled and the status should read "Researching X websites..." or "正在研究..." (see the verification sketch after this list)
7. Save the conversation URL from the browser
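A verification sketch for step 6, run with the same JS-evaluation mechanism as the click fallback above; the selectors and status strings are assumptions about Gemini's current DOM and may need adjusting:

```js
// Sketch: confirm Deep Research actually kicked off (selectors are assumptions).
(() => {
  const btn = Array.from(document.querySelectorAll('button'))
    .find(b => /Start research|开始研究/.test(b.textContent.trim()));
  const started = !btn || btn.disabled; // button removed or disabled once running
  const status = document.body.innerText.match(/Researching \d+ websites|正在研究/);
  return JSON.stringify({ started, status: status ? status[0] : null, url: location.href });
})();
```

The returned `url` doubles as the conversation URL for step 7.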
### Phase 2: Wait for Completion
1. Run `exec("sleep 1200")` (20 minutes), then `process(poll, timeout=1200000)` to wait for it to finish
2. After waking, check status via JS: `(() => { var el = document.querySelectorAll('message-content')[1]; return el ? el.innerText.substring(0, 200) : 'NOT_FOUND'; })()`
3. Look for completion signals: "I've completed your research" or "已完成" (a reusable status probe follows this list)
4. If still running, sleep another 600s and check again (max 2 retries)
5. If failed/stuck after retries, announce the failure and exit
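For steps 2-4, a single status probe can be rerun after each sleep; the "last message" heuristic and the signal strings are assumptions and may drift with Gemini's UI:

```js
// Sketch: one status probe per wake-up (signal strings taken from the steps above).
(() => {
  const msgs = document.querySelectorAll('message-content');
  const text = msgs.length ? msgs[msgs.length - 1].innerText : '';
  if (/I've completed your research|已完成/.test(text)) return 'DONE';
  if (/Researching|正在研究/.test(text)) return 'RUNNING';
  return 'UNKNOWN: ' + text.substring(0, 120);
})();
```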
### Phase 3: Extract Report
1. Count message-content elements: `document.querySelectorAll('message-content').length`
2. The research report is in the LAST `message-content` element (usually index 2)
3. Get total length: `document.querySelectorAll('message-content')[2]?.innerText?.length`
4. Extract in 8000-char chunks using substring: `document.querySelectorAll('message-content')[N]?.innerText?.substring(START, END)` (see the chunking sketch after this list)
5. Concatenate all chunks into the full report text
6. Save to a temp file: write full report to `/tmp/deep_research_<timestamp>.md`
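A chunking sketch for steps 1-4; run it once per chunk, bumping `i` on each call (8000 characters keeps each tool response a manageable size):

```js
// Sketch: pull one 8000-char chunk of the final report per evaluation.
(() => {
  const msgs = document.querySelectorAll('message-content');
  const report = msgs.length ? msgs[msgs.length - 1].innerText : '';
  const SIZE = 8000;
  const i = 0; // ← bump per call: 0, 1, 2, ... up to Math.ceil(report.length / SIZE) - 1
  return report.substring(i * SIZE, (i + 1) * SIZE);
})();
```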
### Phase 4: Export to Notion
**Parent page ID:** `31a4cfb5-c92b-809f-9d8a-dd451718a017` (Deep Research Database)
1. Read the Notion API key: `cat ~/.config/notion/api_key`
2. Parse the report into Notion blocks:
- `#` and `##` lines → heading_2 blocks, `###` lines → heading_3 blocks
- Bullet points → bulleted_list_item blocks
- Regular text → paragraph blocks
- Add a callout at top: "🔬 Generated by Gemini Deep Research on YYYY-MM-DD"
- Split any single rich_text content string at 2000 chars (Notion's per-string cap); a parsing sketch follows this list
3. Create the page via Notion API:
```bash
NOTION_KEY=$(cat ~/.config/notion/api_key)
curl -s -X POST "https://api.notion.com/v1/pages" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{"parent":{"page_id":"31a4cfb5-c92b-809f-9d8a-dd451718a017"},"icon":{"type":"emoji","emoji":"🔬"},"properties":{"title":{"title":[{"text":{"content":"TOPIC"}}]}},"children":[BLOCKS]}'
```
4. If >100 blocks, append remaining via PATCH to `/v1/blocks/{page_id}/children`
5. Rate limit: wait 0.5s between batch requests (Notion allows roughly 3 requests per second); Node sketches for steps 2-5 follow below
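A minimal Node.js sketch of step 2's parsing, assuming the report is plain Markdown; the block shapes follow Notion's public block schema, and `/tmp/notion_blocks.json` is a placeholder path, not a requirement:

```js
// Sketch: convert the Markdown report into Notion block objects (step 2).
const fs = require('fs');

// Notion caps one rich_text content string at 2000 chars, so split long lines.
const rt = (s) => {
  const parts = [];
  for (let i = 0; i < s.length; i += 2000) {
    parts.push({ type: 'text', text: { content: s.slice(i, i + 2000) } });
  }
  return parts.length ? parts : [{ type: 'text', text: { content: '' } }];
};
const block = (type, text) => ({ object: 'block', type, [type]: { rich_text: rt(text) } });

const report = fs.readFileSync(process.argv[2], 'utf8'); // e.g. /tmp/deep_research_<timestamp>.md
const today = new Date().toISOString().slice(0, 10);
const blocks = [{
  object: 'block',
  type: 'callout',
  callout: { rich_text: rt(`Generated by Gemini Deep Research on ${today}`), icon: { type: 'emoji', emoji: '🔬' } },
}];
for (const line of report.split('\n')) {
  const t = line.trim();
  if (!t) continue;
  if (t.startsWith('### ')) blocks.push(block('heading_3', t.slice(4)));
  else if (/^#{1,2} /.test(t)) blocks.push(block('heading_2', t.replace(/^#+ /, '')));
  else if (/^[-*•] /.test(t)) blocks.push(block('bulleted_list_item', t.slice(2)));
  else blocks.push(block('paragraph', t));
}
fs.writeFileSync('/tmp/notion_blocks.json', JSON.stringify(blocks));
```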
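Steps 3-5 can also be done as one Node 18+ script (global `fetch`), equivalent to the curl call above plus the >100-block pagination; it finishes by printing the dash-stripped URL Phase 5 needs. A sketch, not the required mechanism:

```js
// Sketch: create the page, then append remaining blocks 100 at a time (steps 3-5).
const fs = require('fs');

const KEY = fs.readFileSync(`${process.env.HOME}/.config/notion/api_key`, 'utf8').trim();
const HEADERS = { 'Authorization': `Bearer ${KEY}`, 'Notion-Version': '2025-09-03', 'Content-Type': 'application/json' };
const blocks = JSON.parse(fs.readFileSync('/tmp/notion_blocks.json', 'utf8'));

(async () => {
  const res = await fetch('https://api.notion.com/v1/pages', {
    method: 'POST',
    headers: HEADERS,
    body: JSON.stringify({
      parent: { page_id: '31a4cfb5-c92b-809f-9d8a-dd451718a017' },
      icon: { type: 'emoji', emoji: '🔬' },
      properties: { title: { title: [{ text: { content: process.argv[2] || 'Deep Research' } }] } },
      children: blocks.slice(0, 100), // Notion rejects >100 children per request
    }),
  });
  if (!res.ok) throw new Error(await res.text());
  const page = await res.json();
  for (let i = 100; i < blocks.length; i += 100) {
    await new Promise(r => setTimeout(r, 500)); // stay under Notion's ~3 req/s limit
    await fetch(`https://api.notion.com/v1/blocks/${page.id}/children`, {
      method: 'PATCH',
      headers: HEADERS,
      body: JSON.stringify({ children: blocks.slice(i, i + 100) }),
    });
  }
  console.log(`https://www.notion.so/${page.id.replace(/-/g, '')}`); // Phase 5 URL
})();
```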
### Phase 5: Announce
Report back with:
- Research topic
- Brief summary (2-3 key findings)
- Notion page URL: `https://www.notion.so/<page_id_without_dashes>`
## Notes
- Always use `profile="openclaw"` for browser actions
- Deep Research is under **"工具" (Tools) menu**, NOT the model selector
- If Gemini needs login, announce failure — user must log in manually
- The full pipeline should complete in ~25-30 min total
Tags: skill, ai