Add Legal Analyst Squad: AI-powered judicial process analysis system #9
Conversation
…, CNJ and DATAJUD

Complete judicial-process analysis squad with 15 specialized agents in 4 tiers:

- Tier 0 (Triage): barbosa-classifier, fux-procedural, cnj-compliance
- Tier 1 (Research): mendes-researcher, toffoli-aggregator, moraes-analyst, weber-indexer
- Tier 2 (Analysis): barroso-strategist, fachin-precedent, nunes-quantitative, carmem-relator
- Tier 3 (Validation): theodoro-validator, marinoni-quality, datajud-formatter
- Orchestrator: legal-chief

Includes: 8 tasks, 4 checklists, 6 YAML templates, 4 workflows, 6 data/knowledge-base files.

6-phase pipeline: Triagem -> Pesquisa -> Analise -> Fundamentacao -> Validacao -> Entrega.

Based on: CNJ guidelines, DATAJUD schema, TPU/SGT, CPC Art. 489/926-928, JUSBRASIL style.

https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
…, agent orchestration, and legal drafting

Full-stack web application for the legal-analyst squad:

Backend (FastAPI):
- PDF upload and processing with PyMuPDF (text extraction, image capture, metadata)
- Document store with ID-based cross-referencing and remissao formatting
- Chat session management with context windows and phase tracking
- Agent engine with intent detection and 15-agent routing pipeline
- Document clipping (text excerpts and image regions) by page/coordinates
- Legal piece drafting API with reference injection
- Agent discovery, search, and skill-based creation endpoint

Frontend (React 19 + TypeScript + Tailwind CSS):
- Professional dark theme design system (legal-navy, gold accents)
- Chatbot interface with markdown rendering, command palette (*intake, *minutar, etc.)
- PDF viewer with page thumbnails, search, text selection, and clipping tools
- Agent panel organized by tier with search, details, and skill-based agent creation
- Legal editor with piece type selection, remissao insertion, clip embedding
- Considerations panel for attorney notes and strategic input
- Session management with phase tracking (triagem -> entrega)

Architecture follows squad patterns: 6-phase pipeline, tier-based agent hierarchy, CNJ compliance, CPC Art. 489 conformance, DATAJUD schema support.

https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
…Vercel support

- Dockerfile for backend (FastAPI + PyMuPDF on Python 3.12-slim)
- Dockerfile for frontend (multi-stage: Node build + Nginx serving)
- Nginx config with API proxy, SPA routing, gzip, and static caching
- docker-compose.yml with health checks and volume mounts for squad data
- Interactive deploy.sh script supporting 5 deploy targets
- .env.example with all configurable variables
- .dockerignore for clean builds

Deploy options:
- ./deploy.sh local — Docker Compose
- ./deploy.sh railway — Railway (auto-detects compose)
- ./deploy.sh fly — Fly.io (generates fly.toml, region: GRU)
- ./deploy.sh render — Render (generates render.yaml blueprint)
- ./deploy.sh vercel — Vercel (frontend only, generates vercel.json)

https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
- CLAUDE.md: registers squad with entry point, all 12 commands, pipeline docs, and 15 agents reference
- config.yaml: formal squad config with tier structure, core principles, commands, quality gates, and metadata
- scripts/intake.sh: CLI tool to upload PDF and start pipeline (via API or fallback copy to uploads/)
- agent_engine.py: integrate Anthropic API for real agent responses, load agent prompts from agents/*.md, build conversation history, include document context and considerations in system prompt. Template responses kept as fallback when API key is absent.

https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
- deploy-hostinger.sh: automated setup for Ubuntu VPS (Docker, Node 20, Nginx reverse proxy, SSL/Let's Encrypt, UFW firewall, systemd auto-restart service)
- deploy.sh: add 'hostinger' command with guided VPS setup including scp copy to remote server

https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
📝 Walkthrough

Adds a complete "legal-analyst" squad: extensive documentation, agent specifications, tasks, templates and checklists; DATAJUD/CNJ references; workflow YAMLs; a FastAPI backend (PDF processing, document store, agent orchestration, Stripe); a React + Vite frontend (chat, PDF viewer, editor, agent UI); Docker, CLI, and deployment scripts.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant User as "User (Browser / CLI)"
    participant Frontend as "Frontend UI"
    participant API as "FastAPI Backend"
    participant Engine as "Agent Engine"
    participant DocStore as "DocumentStore"
    participant Model as "Anthropic API"
    rect rgba(200,230,255,0.5)
        User->>Frontend: Upload PDF or send command (e.g. "*intake")
        Frontend->>API: POST /api/documents/upload (file, session_id)
        API->>DocStore: add_document(filepath) -> metadata + pages
        API-->>Frontend: upload response (doc_id, pages)
    end
    rect rgba(220,255,200,0.5)
        Frontend->>API: POST /api/chat (session_id, "*intake" or command)
        API->>Engine: process_message(session_id, content, references)
        Engine->>DocStore: resolve_reference(s) (clips/pages/text)
        Engine->>Model: request(system_prompt + agent_prompt + context)
        alt model responds
            Model-->>Engine: agent reply
        else fallback
            Engine-->>Engine: generate fallback template response
        end
        Engine-->>API: agent response (message, references)
        API-->>Frontend: chat response
        Frontend-->>User: render message, enable clips/drafts/actions
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 4
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟡 Minor comments (13)
squads/legal-analyst/CLAUDE.md (1)

7-9: ⚠️ Potential issue | 🟡 Minor

**Add language specification to fenced code block.**

Markdown best practice requires fenced code blocks to specify a language for proper syntax highlighting and rendering.

📝 Proposed fix

````diff
-```
+```text
 `@legal-analyst` {qualquer descricao em linguagem natural}
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/CLAUDE.md around lines 7-9: the fenced code block containing `@legal-analyst {qualquer descricao em linguagem natural}` lacks a language tag; add a language specifier (e.g. `text`) to the fence so the block renders with proper syntax highlighting.

squads/legal-analyst/webapp/.env.example (1)

8-10: ⚠️ Potential issue | 🟡 Minor

**Use an obviously fake API key placeholder for consistency.**

Line 9 uses `sk-ant-api03-...`, which mimics the real Anthropic API key format. While the `...` suffix makes it clearly incomplete, other optional API keys in this file use empty values (e.g., `JUSBRASIL_API_KEY=`). For consistency and clarity, use an obviously fictional value like `your_anthropic_api_key_here` or an empty value.

Suggested change

```diff
-ANTHROPIC_API_KEY=sk-ant-api03-...
+ANTHROPIC_API_KEY=your_anthropic_api_key_here
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/.env.example around lines 8-10: replace the realistic-looking value of ANTHROPIC_API_KEY with an obviously fake placeholder (e.g. `your_anthropic_api_key_here`) or leave it empty (`ANTHROPIC_API_KEY=`) to match the other optional keys, and keep ANTHROPIC_MODEL unchanged.

squads/legal-analyst/data/legal-kb.md (1)
69-81: ⚠️ Potential issue | 🟡 Minor

**Missing diacritics on "súmula".**

The word "sumula" appears multiple times without the required accent. In Portuguese legal terminology, it should be "súmula" (with an accent on the 'u'). This affects lines 69, 71, 79, and 81.

📝 Suggested fix

```diff
-V — se limitar a invocar precedente ou enunciado de sumula, sem identificar seus fundamentos determinantes nem demonstrar que o caso sob julgamento se ajusta aqueles fundamentos;
+V — se limitar a invocar precedente ou enunciado de súmula, sem identificar seus fundamentos determinantes nem demonstrar que o caso sob julgamento se ajusta àqueles fundamentos;
-VI — deixar de seguir enunciado de sumula, jurisprudencia ou precedente invocado pela parte, sem demonstrar a existencia de distincao no caso em julgamento ou a superacao do entendimento.
+VI — deixar de seguir enunciado de súmula, jurisprudência ou precedente invocado pela parte, sem demonstrar a existência de distinção no caso em julgamento ou a superação do entendimento.
```

Similar corrections are needed on lines 79 and 81.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/data/legal-kb.md around lines 69-81: replace every instance of "sumula" with "súmula" in the items labeled "V — se limitar..." and "VI — deixar de seguir..." and in the CPC text under "Art. 927" (entries II and IV referencing enunciados de súmula) so the accented form is used consistently throughout the file.

squads/legal-analyst/workflows/wf-pesquisa-jurisprudencial.yaml (1)
14-17: ⚠️ Potential issue | 🟡 Minor

**Create a task definition file for `classificar-tema` or document why it's inline-only.**

`classificar-tema` (line 15) is a distinct task from `classificar-processo` and serves a different purpose: identifying legal themes and search terms rather than classifying judicial processes. However, `classificar-tema` lacks a formal task definition file, while every other task in the squad has a dedicated definition file (e.g., `classificar-processo.md`). For consistency, either create `tasks/classificar-tema.md` documenting this task, or add a note explaining why it's defined inline in the workflow.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/workflows/wf-pesquisa-jurisprudencial.yaml around lines 14-17: the workflow introduces a standalone task named "classificar-tema" with no dedicated task definition; either add a task definition document for classificar-tema (matching the pattern used for classificar-processo) including description, inputs, outputs (classificacao, termos_busca), and expected behavior, or add a short inline comment explaining why classificar-tema is intentionally inline-only so reviewers know the deviation is deliberate.

squads/legal-analyst/webapp/frontend/src/hooks/usePDF.ts (1)
28-34: ⚠️ Potential issue | 🟡 Minor

**The page cache collides across different PDFs.**

`prev.find((p) => p.page_number === pageNum)` treats page 1 of every document as the same entry. `loadPage()` still returns the fresh response today, but the cached `pages` state becomes incorrect as soon as two documents are open. Cache by `(docId, pageNum)` instead.
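A minimal sketch of the composite-key dedupe (the `PageData` shape and helper name are illustrative, assuming each cached page record carries its `doc_id`):

```typescript
// Hypothetical page shape; assumes the backend response includes doc_id.
interface PageData {
  doc_id: string;
  page_number: number;
  text: string;
}

// Pure helper for the setPages callback: upserts by (doc_id, page_number)
// so page 1 of one PDF never shadows page 1 of another.
function upsertPage(pages: PageData[], incoming: PageData): PageData[] {
  const match = (p: PageData) =>
    p.doc_id === incoming.doc_id && p.page_number === incoming.page_number;
  return pages.some(match)
    ? pages.map((p) => (match(p) ? incoming : p))
    : [...pages, incoming];
}
```

Keeping the merge logic in a pure function like this also makes the cache behavior unit-testable without mounting the hook.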
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/hooks/usePDF.ts around lines 28-34: the pages cache collides across documents because loadPage checks prev.find(p => p.page_number === pageNum); include the document id in the check, e.g. prev.find(p => p.doc_id === docId && p.page_number === pageNum) in the setPages callback, and ensure newly pushed page objects include doc_id so per-(docId, pageNum) deduping works.

squads/legal-analyst/webapp/frontend/src/components/MessageBubble.tsx (1)
110-123: ⚠️ Potential issue | 🟡 Minor

**Guard against an undefined `attachments` array.**

Same concern as `references` - if `message.attachments` could be undefined, this will throw.

🛡️ Suggested defensive fix

```diff
-          {message.attachments.length > 0 && (
+          {message.attachments?.length > 0 && (
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/components/MessageBubble.tsx around lines 110-123: guard against message.attachments being undefined by using optional chaining or a default empty array, e.g. (message.attachments?.length ?? 0) > 0 for the conditional and (message.attachments ?? []).map(...) when rendering, so the JSX that maps over attachments cannot throw.

squads/legal-analyst/agents/mendes-researcher.md (1)
38-46: ⚠️ Potential issue | 🟡 Minor

**Align the trigger with the documented filter syntax.**

`pesquisar-jurisprudencia` declares `tribunal` and `periodo` inputs, but the trigger only advertises `{tema}`, while the example later uses `--tribunal=STJ`. That leaves the command contract ambiguous for callers and for any parser/prompt-template consuming this spec.

Suggested fix

```diff
-  trigger: "*pesquisar-jurisprudencia {tema}"
+  trigger: "*pesquisar-jurisprudencia {tema} [--tribunal=STF|STJ|TJ-XX|TRF-X] [--periodo=AAAA-AAAA]"
```

Also applies to: 137

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/agents/mendes-researcher.md around lines 38-46: the trigger for pesquisar-jurisprudencia only exposes "{tema}" while the inputs declare optional tribunal and periodo (and the examples use --tribunal); update the trigger string to advertise the optional flags (e.g. "--tribunal={tribunal} --periodo={periodo}" or a generic flags placeholder) so it matches the inputs section, and apply the same trigger syntax to the other occurrence around line 137.

squads/legal-analyst/webapp/frontend/src/components/AgentPanel.tsx (1)
71-90: ⚠️ Potential issue | 🟡 Minor

**Creation failures are currently invisible to the user.**

A rejected `onCreate` only clears the spinner in `finally`; there is no inline error or toast, so the action looks like a no-op. Catch the rejection and surface a retryable error state here.
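One way to surface the failure (a sketch, not the component's actual code; `runCreate` and the result shape are illustrative):

```typescript
// Illustrative wrapper: resolves to a result object instead of throwing,
// so the caller can set an inline error state and keep the form inputs.
async function runCreate(
  onCreate: () => Promise<void>,
): Promise<{ ok: boolean; error?: string }> {
  try {
    await onCreate();
    return { ok: true };
  } catch (err) {
    // Normalize unknown errors into a displayable message.
    const message = err instanceof Error ? err.message : String(err);
    return { ok: false, error: message };
  }
}
```

In `handleCreate`, a failed result would then feed something like a `setCreateError(result.error)` state (hypothetical setter) while leaving the dialog open so the user can retry.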
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/components/AgentPanel.tsx around lines 71-90: handleCreate only clears the spinner in finally and swallows rejections from onCreate; wrap the await onCreate(...) in try/catch, set an error state (e.g. createError via useState) on failure and surface it in the UI (inline message or toast) so users can retry, keep setCreating(false) in finally, and run the success behavior (setShowCreate(false), reset fields) only on success so failed creates do not clear the form inputs.

squads/legal-analyst/webapp/frontend/src/components/ChatInterface.tsx (1)
60-69: ⚠️ Potential issue | 🟡 Minor

**The `/` shortcut opens and closes the command palette in the same keystroke.**

`handleKeyDown` enables the palette on `/`, but `handleInput` immediately disables it because `/` does not start with `*`. Either support both prefixes consistently or remove the slash shortcut to avoid a dead path.

Simple fix if `*` is the only supported prefix

```diff
-    if (e.key === "/" && input === "") {
-      setShowCommands(true);
-    }
```

Also applies to: 73-82

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/components/ChatInterface.tsx around lines 60-69: the slash shortcut opens the command palette in handleKeyDown but handleInput immediately hides it because only inputs starting with "*" count as commands; either remove the "/" check in handleKeyDown or make handleInput accept inputs starting with "/" as well, and ensure setShowCommands is called under the same prefix rule in both places so the palette isn't opened and then immediately closed.

squads/legal-analyst/webapp/frontend/src/components/PDFViewer.tsx (1)
71-77: ⚠️ Potential issue | 🟡 Minor

**Pages beyond the first 10 never request thumbnails.**

This effect preloads only 10 pages, but the sidebar renders every page and never asks for missing thumbnails later. On long PDFs, everything after page 10 stays a placeholder forever.

Suggested fix

```diff
   useEffect(() => {
     if (activeDoc) {
-      for (let i = 1; i <= Math.min(activeDoc.total_pages, 10); i++) {
-        onLoadThumbnail(activeDoc.doc_id, i);
-      }
+      const pages = new Set<number>([activePage]);
+      for (let i = 1; i <= Math.min(activeDoc.total_pages, 10); i++) {
+        pages.add(i);
+      }
+      pages.forEach((page) => onLoadThumbnail(activeDoc.doc_id, page));
     }
-  }, [activeDoc, onLoadThumbnail]);
+  }, [activeDoc, activePage, onLoadThumbnail]);
```

Also applies to: 303-334

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/components/PDFViewer.tsx around lines 71-77: the effect only preloads the first 10 pages, leaving later pages as placeholders; request thumbnails when their page components render or become visible instead: remove or relax the hard cap in the effect and/or call onLoadThumbnail(activeDoc.doc_id, pageNumber) from the thumbnail render logic (or a Thumbnail component's mount/visibility handler) so every page triggers a load when displayed, keeping the existing prefetch for the first N pages and falling back to per-page lazy requests beyond that.

squads/legal-analyst/agents/toffoli-aggregator.md (1)
55-61: ⚠️ Potential issue | 🟡 Minor

**`mapear-repetitivos` is missing its execution steps.**

Every other command in these agent specs includes a `steps` section, but this one ends at `output`. That makes the command incomplete and is the likeliest place for a schema/rendering mismatch downstream.

Suggested fix

```diff
 mapear-repetitivos:
   trigger: "*mapear-repetitivos {tema}"
   description: >
     Mapear temas repetitivos STF/STJ aplicaveis.
   inputs:
     - tema: string
   output: repetitivos-mapa.md
+  steps:
+    - Pesquisar temas de repercussao geral e recursos repetitivos sobre o tema
+    - Identificar tese fixada, orgao julgador e status de julgamento
+    - Verificar aderencia dos temas ao caso analisado
+    - Consolidar efeitos vinculantes e divergencias residuais
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/agents/toffoli-aggregator.md around lines 55-61: the mapear-repetitivos command lacks the required steps section; add a steps block matching the structure of the other commands that enumerates the execution flow for the trigger "*mapear-repetitivos {tema}" (e.g. validation of tema, the core mapping logic, and writing repetitivos-mapa.md) so the agent spec is complete and consistent.

squads/legal-analyst/webapp/backend/core/document_store.py (1)
102-111: ⚠️ Potential issue | 🟡 Minor

**Add error handling for malformed `page_range` parsing.**

The `page_range` parsing assumes a valid format like `"1-5"` but doesn't handle malformed input. Invalid values will raise `ValueError` or produce unexpected behavior.

```diff
         if ref.page_range:
             parts = ref.page_range.split("-")
-            if len(parts) == 2:
-                start, end = int(parts[0]), int(parts[1])
+            if len(parts) == 2 and parts[0].isdigit() and parts[1].isdigit():
+                start, end = int(parts[0]), int(parts[1])
+                if start > end:
+                    start, end = end, start  # swap if reversed
                 texts = []
                 for pn in range(start, end + 1):
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/backend/core/document_store.py around lines 102-111: the page_range parsing (used to populate result["content"] via get_page) can raise ValueError on malformed input; validate and sanitize ref.page_range before converting to ints: check it contains exactly one "-" with non-empty numeric parts (use .strip() and .isdigit(), or int() inside try/except), ensure start <= end (or swap/return empty), skip out-of-range pages, and on parse failure set result["content"] to an empty string or log and skip so the method remains safe.

squads/legal-analyst/webapp/backend/core/pdf_processor.py (1)
130-137: ⚠️ Potential issue | 🟡 Minor

**Context extraction may return misaligned text.**

The code uses `context.lower().find(query.lower())` to locate the query position, but `page.search_for(query)` returns multiple rectangle instances that may be at different positions. The context slice is always relative to the first occurrence, not the specific `inst` being processed.

```diff
         if text_instances:
-            context = page.get_text("text")
             for inst in text_instances:
-                start = max(0, context.lower().find(query.lower()) - 100)
-                end = min(len(context), start + len(query) + 200)
+                # Extract text around the match using the instance rectangle
+                expanded_rect = fitz.Rect(inst.x0 - 50, inst.y0 - 30, inst.x1 + 50, inst.y1 + 30)
+                context = page.get_text("text", clip=expanded_rect)
                 results.append({
                     "page": page_num + 1,
                     "rect": [inst.x0, inst.y0, inst.x1, inst.y1],
-                    "context": context[start:end].strip(),
+                    "context": context.strip()[:300],
                 })
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/backend/core/pdf_processor.py around lines 130-137: the context slice uses the first match of query.lower() instead of the specific occurrence represented by each inst; compute the start index per inst (e.g. derive the inst's text and find it within context, falling back to context.find(query.lower()) if unavailable), then compute start/end from that per-inst index and append the correct per-inst "context" into results.
🧹 Nitpick comments (25)
squads/legal-analyst/CLAUDE.md (1)
16-16: Consider correcting Portuguese accent marks for improved professionalism.

Several Portuguese words are missing required accent marks:

- Line 16: "Constroi" → "Constrói"
- Line 144: "Analise" → "Análise"
- Line 167: "influencia" is the verb form here and is correctly written without an accent (only the noun "influência" takes one), so no change is needed

While the meaning remains clear, adding proper accents enhances professionalism and readability for Portuguese-speaking users.

📝 Proposed corrections

```diff
-5. **Constroi** a estrategia argumentativa com fundamentacao CPC Art. 489
+5. **Constrói** a estrategia argumentativa com fundamentacao CPC Art. 489
```

```diff
-| `@fachin-precedent` | Min. Edson Fachin | Analise de precedentes |
+| `@fachin-precedent` | Min. Edson Fachin | Análise de precedentes |
```

Also applies to: 144

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/CLAUDE.md at line 16: change "Constroi" to "Constrói", and at line 144 change "Analise" to "Análise" (search for the exact tokens to locate each occurrence) so the document uses correct Portuguese diacritics.

squads/legal-analyst/webapp/frontend/tsconfig.json (1)
24-24: Add `vite.config.ts` to the TypeScript include scope.

The current `include: ["src"]` does not cover `vite.config.ts`, which is used during the build (`tsc -b` in the package.json scripts). No companion `tsconfig.node.json` exists to type-check it separately. Include it in the main tsconfig or create a companion config with project references.

Minimal fix

```diff
-  "include": ["src"]
+  "include": ["src", "vite.config.ts"]
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/tsconfig.json at line 24: add "vite.config.ts" to the "include" array (or add a companion tsconfig.node.json wired in via project references and the tsc -b build) so vite.config.ts is type-checked during the build.

squads/legal-analyst/webapp/frontend/src/hooks/useAgents.ts (2)
31-38: Search fallback may produce empty results if agents haven't loaded.

If the API search fails and the `agents` state is empty (e.g., on initial render before `loadAgents` completes), the local filter will always return an empty array. Consider using `DEFAULT_AGENTS` as the fallback source.

♻️ Use DEFAULT_AGENTS as fallback source

```diff
     } catch {
-      const filtered = agents.filter(
+      const source = agents.length > 0 ? agents : DEFAULT_AGENTS;
+      const filtered = source.filter(
         (a) =>
           a.name.toLowerCase().includes(query.toLowerCase()) ||
           a.description.toLowerCase().includes(query.toLowerCase()),
       );
       setSearchResults(filtered);
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/hooks/useAgents.ts around lines 31-38: the catch block filters the in-memory agents array, which can be empty before loadAgents finishes; fall back to DEFAULT_AGENTS instead (e.g. const source = agents.length ? agents : DEFAULT_AGENTS, then filter source), ensure DEFAULT_AGENTS is imported, and keep the same case-insensitive checks and setSearchResults(filtered).
15-17: Silent error handling may hide issues during development.

The empty catch block swallows API errors without logging. While the fallback to `DEFAULT_AGENTS` is a good resilience pattern, consider adding logging for debugging purposes.

♻️ Add error logging

```diff
     } catch {
       // Fallback with static agent list
+      console.warn('Failed to load agents from API, using fallback list');
       setAgents(DEFAULT_AGENTS);
     } finally {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/hooks/useAgents.ts around lines 15-17: the catch block swallows errors silently; accept the error (e) and log it with a clear message and context (console.error or the project logger, e.g. "Failed to fetch agents in useAgents:") before falling back to setAgents(DEFAULT_AGENTS).

squads/legal-analyst/webapp/frontend/src/components/Header.tsx (2)
54-56: Settings button has no onClick handler.

The Settings button is rendered but does nothing when clicked. Consider adding a handler or removing it until functionality is implemented.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/components/Header.tsx around lines 54-56: the Settings button has no onClick and does nothing; either wire it to a handler (e.g. an openSettings function defined in the component, or an onOpenSettings prop) and add an aria-label such as "Open settings" for accessibility, or remove the button until the behavior exists to avoid dead UI.
48-53: Dark mode toggle is non-functional.

The `isDark` state toggles but doesn't apply any theme changes (e.g., adding/removing a `dark` class on `document.documentElement` or persisting to localStorage). Currently this creates a UI element that doesn't work.

If this is intentional placeholder behavior, consider either:

- Removing the toggle until fully implemented
- Adding a TODO comment explaining the planned implementation
- Implementing basic dark mode support

♻️ Basic dark mode implementation

```diff
-  const [isDark, setIsDark] = useState(true);
+  const [isDark, setIsDark] = useState(() => {
+    if (typeof window !== 'undefined') {
+      return document.documentElement.classList.contains('dark');
+    }
+    return true;
+  });
+
+  const toggleDarkMode = () => {
+    setIsDark((prev) => {
+      const next = !prev;
+      document.documentElement.classList.toggle('dark', next);
+      return next;
+    });
+  };
```

Then update the button:

```diff
-          onClick={() => setIsDark(!isDark)}
+          onClick={toggleDarkMode}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/webapp/frontend/src/components/Header.tsx around lines 48-53: the toggle only flips the local isDark state; make it add/remove the "dark" class on document.documentElement, persist the choice to localStorage (e.g. key "theme"), and initialize isDark from localStorage or the system preference on mount (useEffect) so the theme is applied on load.

squads/legal-analyst/tasks/classificar-processo.md (1)
11-16: Minor: Add blank line before markdown table.

Per markdown best practices (MD058), tables should be surrounded by blank lines.

📝 Suggested fix

```diff
 ## Inputs
+
 | Parametro | Tipo | Obrigatorio | Descricao |
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/tasks/classificar-processo.md around lines 11-16: the table under the "## Inputs" heading lacks surrounding blank lines (MD058); insert a blank line immediately before the table (the block starting with "| Parametro | Tipo | Obrigatorio | Descricao |") and ensure there is one after it as well.

squads/legal-analyst/tasks/perfil-relator.md (1)
11-15: Minor: Add blank line before markdown table.

Per markdown best practices (MD058), tables should be surrounded by blank lines.

📝 Suggested fix

```diff
 ## Inputs
+
 | Parametro | Tipo | Obrigatorio | Descricao |
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In squads/legal-analyst/tasks/perfil-relator.md around lines 11-15: the table under the "## Inputs" heading needs a blank line between the heading and the table row starting with "| Parametro | Tipo | Obrigatorio | Descricao |" (and one after the table if missing) to satisfy MD058.

squads/legal-analyst/tasks/jurimetria.md (1)
11-16: Minor: Add blank line before markdown table. Per markdown best practices (MD058), tables should be surrounded by blank lines for consistent rendering across parsers.
📝 Suggested fix
```diff
 ## Inputs
+
 | Parametro | Tipo | Obrigatorio | Descricao |
 |-----------|------|-------------|-----------|
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/tasks/jurimetria.md` around lines 11 - 16, insert a blank line between the "## Inputs" heading and the markdown table so the table is separated from the heading (i.e., add an empty line before the table that begins with "| Parametro | Tipo | Obrigatorio | Descricao |"); this will satisfy MD058 and ensure consistent rendering across markdown parsers.

squads/legal-analyst/templates/fundamentacao-tmpl.yaml (1)
41-43: Normalize `principios_em_colisao` to a single schema shape. This currently serializes as a list of one-key objects, which is awkward to consume downstream. Since the field is plural, a flat sequence is the simpler contract here.
♻️ Suggested template cleanup
```diff
 principios_em_colisao:
-  - principio_1: "{P1}"
-  - principio_2: "{P2}"
+  - "{P1}"
+  - "{P2}"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/templates/fundamentacao-tmpl.yaml` around lines 41 - 43, the current `principios_em_colisao` serializes as a list of single-key objects (`principio_1: "{P1}"`, `principio_2: "{P2}"`), which is awkward to consume; change it to a flat sequence so every item has the same shape, e.g. replace the list of one-key maps with a simple YAML sequence of placeholders (`- "{P1}"`, `- "{P2}"`). Update any code that reads `principios_em_colisao` to expect an array of strings instead of an array of single-key objects, keeping the field name and the `{P1}`/`{P2}` placeholders intact.

squads/legal-analyst/README.md (2)
199-278: Add language specifier to file structure code block. Same issue as the architecture diagram: add `text` or `plaintext` as the language specifier.

📝 Suggested fix
````diff
-```
+```text
 squads/legal-analyst/
 +-- agents/
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/README.md` around lines 199 - 278, The README's directory-tree code block (the fenced block beginning with ``` showing "squads/legal-analyst/") lacks a language specifier; update the opening fence from ``` to a plain-text specifier like ```text or ```plaintext so the tree is treated as plain text (edit the fenced block that starts before "squads/legal-analyst/" and keep the closing ``` unchanged).
27-43: Add language specifier to fenced code block. The architecture diagram code block lacks a language specifier. While this is primarily ASCII art, adding a specifier (e.g., `text` or `plaintext`) satisfies markdown linting rules and improves consistency.

📝 Suggested fix
````diff
-```
+```text
 +-------------------------------------------------------------+
 | CAMADA 1: MOTOR PERMANENTE (frameworks juridicos)           |
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/README.md` around lines 27 - 43, update the fenced code block containing the ASCII architecture diagram to include a language specifier (e.g., `text` or `plaintext`) immediately after the opening triple backticks; locate the ASCII block (starts with the +---- diagram and includes "CAMADA 1"/"CAMADA 2" and the PIPELINE line) and add the specifier to the opening fence only.

squads/legal-analyst/tasks/analisar-processo-completo.md (1)
11-17: Add blank line before table for markdown consistency. The inputs table should be surrounded by blank lines per markdown best practices.
📝 Suggested fix
```diff
 ## Inputs
+
 | Parametro | Tipo | Obrigatorio | Descricao |
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/tasks/analisar-processo-completo.md` around lines 11 - 17, the "## Inputs" heading currently has the table immediately after it with no blank line; add a single blank line between the heading and the table so the table is separated per markdown best practices.

squads/legal-analyst/webapp/frontend/src/hooks/useChat.ts (1)
28-29: Consider typed error handling instead of `any`. Using `any` for caught errors loses type safety. Consider using a type guard or `unknown` with proper narrowing.

♻️ Suggested improvement

```diff
- } catch (e: any) {
-   setError(e.message);
+ } catch (e: unknown) {
+   setError(e instanceof Error ? e.message : "Unknown error");
```

Also applies to: 42-43, 82-83
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/frontend/src/hooks/useChat.ts` around lines 28 - 29, replace the untyped catches in the useChat hook: instead of `catch (e: any) { setError(e.message); }` (and the similar blocks at the other sites), change to `catch (e: unknown)` and narrow the error with a type guard before calling setError; e.g., check `e instanceof Error` then call `setError(e.message)` and otherwise call `setError(String(e))` or a fallback message. Apply this to all catch blocks referencing setError in useChat.

squads/legal-analyst/templates/processo-analise-tmpl.yaml (1)
77-107: Keep numeric fields numeric in this YAML contract. Metrics and scores are currently modeled as presentation strings like `"{PERCENTUAL}"` and `"{SCORE}/6"`. Any consumer that needs thresholding, sorting, or aggregation will have to parse display text back into numbers. Keep raw numeric fields in the template and derive human-readable formatting later.

♻️ Suggested refactor

```diff
 jurimetria:
-  total_acordaos: "{N}"
-  taxa_procedencia: "{PERCENTUAL}"
-  valor_medio: "{VALOR}"
-  valor_mediano: "{VALOR}"
+  total_acordaos: 0  # {N}
+  taxa_procedencia_percentual: 0.0  # {PERCENTUAL}
+  valor_medio: 0.0  # {VALOR}
+  valor_mediano: 0.0  # {VALOR}
   tendencia_temporal: "{TENDENCIA}"

 validacao:
-  art_489_score: "{SCORE}/6"
-  qualidade_precedentes: "{SCORE}/8"
-  cnj_compliance: "{SCORE}/6"
+  art_489_score: 0
+  art_489_score_max: 6
+  qualidade_precedentes_score: 0
+  qualidade_precedentes_score_max: 8
+  cnj_compliance_score: 0
+  cnj_compliance_score_max: 6
   parecer_final: "{APROVADO|REPROVADO|REVISAO}"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/templates/processo-analise-tmpl.yaml` around lines 77 - 107, the template uses presentation strings for numeric/enum fields (e.g., jurimetria.total_acordaos, jurimetria.taxa_procedencia, jurimetria.valor_medio, jurimetria.valor_mediano, validacao.art_489_score, validacao.qualidade_precedentes, validacao.cnj_compliance, parecer_final); change these keys to hold raw numeric (integer/float) or canonical enum/boolean values instead of human-readable strings (remove formats like "{PERCENTUAL}" and "{SCORE}/6"), leaving display formatting to the consumer/presentation layer. Ensure field names stay the same so only the value types change (e.g., art_489_score: number, taxa_procedencia: number, parecer_final: enum), and update any schema/tests that validate this template accordingly.

squads/legal-analyst/webapp/frontend/src/components/LegalEditor.tsx (3)
46-52: Consider adding JSDoc documentation for this component. Based on learnings for this repository, React components should be documented with JSDoc comments including description, props, and usage examples.
📝 Suggested JSDoc
```diff
+/**
+ * LegalEditor - Editor for constructing legal drafting pieces.
+ *
+ * @param props.documents - List of loaded document metadata for reference insertion
+ * @param props.clips - Available document clips for content insertion
+ * @param props.onDraft - Callback to generate draft with piece type, considerations, instructions, clips, and references
+ * @param props.onReport - Callback to generate strategic report
+ * @param props.isLoading - Loading state for draft generation
+ */
 export default function LegalEditor({
   documents,
   clips,
```

Based on learnings: "Document components with JSDoc comments including description, props, and usage examples" for files matching `squads/apex/**/*.{jsx,tsx}`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/frontend/src/components/LegalEditor.tsx` around lines 46 - 52, Add a JSDoc block above the LegalEditor component that documents the component purpose, describes each prop (documents, clips, onDraft, onReport, isLoading) and their types, and includes a short usage example showing how to render <LegalEditor /> with typical props; reference the component name LegalEditor and the prop names so reviewers can locate the comment easily.
167-175: Toolbar buttons are non-functional placeholders. The Bold, AlignLeft, and List buttons render but have no `onClick` handlers. Consider either implementing the functionality or removing/disabling them to avoid confusing users.

Would you like me to open an issue to track implementing rich text formatting, or should these buttons be removed for now?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/frontend/src/components/LegalEditor.tsx` around lines 167 - 175, The toolbar buttons rendering Bold, AlignLeft, and List in LegalEditor.tsx are non-functional; either wire them to the editor commands or disable/remove them to avoid confusion. Implement concise handlers (e.g., add handleToggleBold, handleAlignLeft, handleToggleList) that call your editor API (for example editor.chain().focus().toggleBold(), editor.chain().focus().setTextAlign('left'), editor.chain().focus().toggleBulletList() or equivalent) and attach them as onClick on the Bold, AlignLeft, and List buttons, making sure to check editor existence and update button aria-pressed state; alternatively, remove the buttons or set disabled/aria-disabled and adjust styling so they are clearly non-interactive.
68-68: Specify radix in `parseInt` calls for clarity. While `parseInt` defaults to base 10 for decimal strings, explicitly specifying the radix improves readability and avoids edge cases.

```diff
- page: refPage ? parseInt(refPage) : undefined,
+ page: refPage ? parseInt(refPage, 10) : undefined,
```

Also applies to line 244:

```diff
- insertReference(refDocId, refPage ? parseInt(refPage) : undefined)
+ insertReference(refDocId, refPage ? parseInt(refPage, 10) : undefined)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/frontend/src/components/LegalEditor.tsx` at line 68, update both `parseInt` usages to explicitly pass a radix of 10: change `parseInt(refPage)` to `parseInt(refPage, 10)` and likewise add `, 10` to the other `parseInt` call referenced on line 244 so both calls explicitly use base-10 parsing.

squads/legal-analyst/webapp/backend/core/pdf_processor.py (1)
26-34: Consider making image extraction optional. All embedded images are extracted to disk unconditionally, which could be expensive for image-heavy PDFs. Consider adding a flag to control this behavior or deferring extraction until needed.
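The opt-in pattern can be illustrated without PyMuPDF. A framework-free sketch of the flag guard (`process_pages`, `PageResult`, and the filename scheme are assumptions for illustration, not the module's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class PageResult:
    text: str
    image_paths: list = field(default_factory=list)

def process_pages(pages, extract_images: bool = False):
    """Always collect text; only enumerate image outputs when explicitly requested."""
    results = []
    for i, page in enumerate(pages):
        result = PageResult(text=page["text"])
        if extract_images:  # opt-in guard keeps image-heavy PDFs cheap by default
            result.image_paths = [f"page{i}_img{j}.png"
                                  for j, _ in enumerate(page["images"])]
        results.append(result)
    return results

pages = [{"text": "fls. 1", "images": [b"\x89PNG", b"\x89PNG"]}]
cheap = process_pages(pages)                      # default: no image work
full = process_pages(pages, extract_images=True)  # caller opted in
```

In the real processor, the guarded branch would be the `doc.extract_image`/write-to-`CLIPS_DIR` loop; the default path then skips all image I/O.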
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/backend/core/pdf_processor.py` around lines 26 - 34, the current loop unconditionally writes every embedded image to disk (image_paths, page.get_images, doc.extract_image, CLIPS_DIR, filepath), which is costly for image-heavy PDFs; add a configurable boolean flag (e.g., extract_images=False) to the surrounding function or class and wrap the extraction logic so images are only written when that flag is true (or alternatively collect image metadata and defer writing until explicitly requested), ensuring callers can opt in to extraction while preserving existing variable names (image_paths, img_path) and behavior when the flag is enabled.

squads/legal-analyst/webapp/backend/core/chat_manager.py (2)
67-69: Consider defining a custom exception class for session errors. Per static analysis (TRY003), long exception messages should be defined in exception classes. This is a minor style issue but improves consistency.
```python
class SessionNotFoundError(ValueError):
    """Raised when a session is not found."""

    def __init__(self, session_id: str):
        super().__init__(f"Session {session_id} not found")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/backend/core/chat_manager.py` around lines 67 - 69, Create a custom exception class SessionNotFoundError (subclassing ValueError) with a constructor that accepts session_id and formats the message, then replace the current ValueError raise in ChatManager where session = self._sessions.get(session_id) (and the subsequent raise ValueError(f"Session {session_id} not found")) to raise SessionNotFoundError(session_id) instead; update any imports or tests that expect the old ValueError if necessary.
4-4: Remove unused import. The `json` module is imported but never used.

```diff
-import json
 import uuid
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/backend/core/chat_manager.py` at line 4, remove the unused import of the json module: delete the line "import json" so there are no unused imports remaining.

squads/legal-analyst/workflows/wf-analise-processual-completa.yaml (2)
43-45: Output name contains parentheses which may cause parsing issues. The output `via_alternativa (se inadmissivel)` includes a parenthetical comment. If outputs are used as variable identifiers downstream, this could cause failures.

Consider using a clean identifier with the note in a description:

```diff
 outputs:
   - admissibilidade_report
-  - via_alternativa (se inadmissivel)
+  - via_alternativa  # populated only if inadmissible
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/workflows/wf-analise-processual-completa.yaml` around lines 43 - 45, The output name "via_alternativa (se inadmissivel)" contains parentheses which can break downstream parsing; rename the output to a safe identifier such as "via_alternativa" (or "via_alternativa_inadmissivel" if you need uniqueness) in the outputs list and move the conditional note "(se inadmissivel)" into the workflow or output description metadata so the identifier is clean while the human-readable condition is preserved; update any references to the old name (e.g., usages of via_alternativa (se inadmissivel)) to the new identifier in the workflow logic.
294-299: Output path contains embedded text that may not parse cleanly. The output `"minds/{tema}/ (arquivo completo)"` mixes a path template with a comment. Consider separating:

```diff
 outputs:
-  - "minds/{tema}/ (arquivo completo)"
+  - "minds/{tema}/"  # arquivo completo
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/workflows/wf-analise-processual-completa.yaml` around lines 294 - 299, the outputs entry currently embeds a comment inside the path string ("minds/{tema}/ (arquivo completo)"), which can break parsing; update the outputs list to use a clean path template like "minds/{tema}/" (remove the " (arquivo completo)" suffix) and move any human-readable note into a comment outside the YAML string (or a dedicated description property if the workflow schema supports it) so that the outputs key contains only the valid path template.

squads/legal-analyst/webapp/backend/agents/loader.py (2)
58-62: Add defensive check for domain list concatenation. If `primary` or `secondary` keys exist but contain non-list values (e.g., a string), the concatenation on line 60 will raise a `TypeError`.

```diff
 if isinstance(domains, dict):
-    expertise = domains.get("primary", []) + domains.get("secondary", [])
+    primary = domains.get("primary", [])
+    secondary = domains.get("secondary", [])
+    if isinstance(primary, list) and isinstance(secondary, list):
+        expertise = primary + secondary
 elif isinstance(domains, list):
     expertise = domains
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/backend/agents/loader.py` around lines 58 - 62, The current code in loader.py builds expertise by concatenating persona.get("expertise_domains", {}) -> domains then domains.get("primary", []) + domains.get("secondary", []), which can raise a TypeError if primary/secondary are not lists; update the logic in the block handling domains to defensively validate or coerce primary and secondary to lists before concatenation (e.g., treat non-list values as single-item lists or fallback to []), ensuring you check isinstance(..., list) for each of domains.get("primary") and domains.get("secondary") and only then perform the addition and assign to expertise.
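The coercion can also be factored into a tiny helper so both keys get identical treatment, including the scalar-string case. A sketch (the `_as_list` helper is an assumption, not existing loader code):

```python
def _as_list(value) -> list:
    """Coerce a YAML field defensively: None -> [], list -> itself, scalar -> [scalar]."""
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]

def expertise_from_domains(domains) -> list:
    """Mirror of the loader branch, with both dict keys coerced before concatenation."""
    if isinstance(domains, dict):
        return _as_list(domains.get("primary")) + _as_list(domains.get("secondary"))
    return _as_list(domains)
```

Wrapping scalars instead of discarding them preserves hand-edited agent files where an author wrote `primary: civil` rather than a one-item list.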
118-124: Substring match for workflow ID may cause unintended matches. Using `workflow_id in wf_file.stem` could match multiple workflows. For example, searching for `"validacao"` would match both `wf-validacao.yaml` and `wf-pre-validacao.yaml`.

Consider using exact match or a more precise pattern:

```diff
 def load_workflow(workflow_id: str) -> dict | None:
     """Load a workflow YAML definition."""
     for wf_file in WORKFLOWS_DIR.glob("*.yaml"):
-        if workflow_id in wf_file.stem:
+        if wf_file.stem == workflow_id or wf_file.stem == f"wf-{workflow_id}":
             content = wf_file.read_text(encoding="utf-8")
             return yaml.safe_load(content)
     return None
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/backend/agents/loader.py` around lines 118 - 124, The current load_workflow function uses a substring check ("workflow_id in wf_file.stem") which can return unintended files; update load_workflow to perform a precise match against wf_file.stem (or a well-defined pattern) instead—e.g., compare workflow_id == wf_file.stem or use a strict pattern match with fnmatch/regex that accounts for the expected "wf-" prefix/suffix—so that only the exact workflow file (referenced by WORKFLOWS_DIR and wf_file.stem) is returned.
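The ambiguity is easy to demonstrate with plain stems, independent of the filesystem. A minimal sketch of the exact-match variant, assuming the `wf-` filename convention used by this squad's workflow files:

```python
def find_workflow(stems: list[str], workflow_id: str) -> list[str]:
    """Exact stem match, accepting the conventional 'wf-' filename prefix."""
    return [s for s in stems if s == workflow_id or s == f"wf-{workflow_id}"]

stems = ["wf-validacao", "wf-pre-validacao", "wf-analise-relator"]
loose = [s for s in stems if "validacao" in s]   # substring: ambiguous
strict = find_workflow(stems, "validacao")       # exact: unambiguous
```

Because glob order is filesystem-dependent, the substring version can silently load a different workflow between environments; the exact match makes the lookup deterministic.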
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 135ee8a6-b956-45e4-992f-39ce23466c38
📒 Files selected for processing (91)
```text
squads/legal-analyst/
  CLAUDE.md, README.md, config.yaml
  agents/: barbosa-classifier.md, barroso-strategist.md, carmem-relator.md, cnj-compliance.md,
    datajud-formatter.md, fachin-precedent.md, fux-procedural.md, legal-chief.md,
    marinoni-quality.md, mendes-researcher.md, moraes-analyst.md, nunes-quantitative.md,
    theodoro-validator.md, toffoli-aggregator.md, weber-indexer.md
  checklists/: classificacao-processual-check.md, cnj-resolucoes-check.md,
    fundamentacao-quality-check.md, precedente-quality-check.md
  data/: cnj-resolucoes-reference.md, datajud-schema-reference.md, legal-kb.md,
    relatores-reference.md, tpu-classes-reference.md, tribunais-reference.md
  scripts/: intake.sh
  tasks/: analisar-precedente.md, analisar-processo-completo.md, classificar-processo.md,
    consolidar-precedentes.md, estrategia-argumentativa.md, formatar-datajud.md,
    intake-processo-pdf.md, jurimetria.md, perfil-relator.md, pesquisar-jurisprudencia.md,
    validar-fundamentacao.md, verificar-admissibilidade.md
  templates/: datajud-output-tmpl.yaml, fundamentacao-tmpl.yaml, jurisprudencia-mapa-tmpl.yaml,
    precedente-ficha-tmpl.yaml, processo-analise-tmpl.yaml, relator-perfil-tmpl.yaml
  webapp/: .dockerignore, .env.example, deploy-hostinger.sh, deploy.sh, docker-compose.yml
  webapp/backend/: Dockerfile, main.py, requirements.txt, agents/__init__.py, agents/loader.py,
    core/__init__.py, core/agent_engine.py, core/chat_manager.py, core/config.py,
    core/document_store.py, core/models.py, core/pdf_processor.py
  webapp/frontend/: Dockerfile, index.html, nginx.conf, package.json, postcss.config.js,
    tailwind.config.js, tsconfig.json, vite.config.ts
  webapp/frontend/src/: App.tsx, index.css, main.tsx, services/api.ts, types/index.ts,
    components/AgentPanel.tsx, components/ChatInterface.tsx, components/Header.tsx,
    components/LegalEditor.tsx, components/MessageBubble.tsx, components/PDFViewer.tsx,
    components/Sidebar.tsx, hooks/useAgents.ts, hooks/useChat.ts, hooks/usePDF.ts
  workflows/: wf-analise-processual-completa.yaml, wf-analise-relator.yaml,
    wf-pesquisa-jurisprudencial.yaml, wf-validacao.yaml
```
```python
def _load_agent_prompt(agent_id: str) -> str:
    """Load the full agent prompt from its markdown definition file."""
    agent_file = AGENTS_DIR / f"{agent_id}.md"
    if agent_file.exists():
        return agent_file.read_text(encoding="utf-8")
```
Reject agent_id values that escape AGENTS_DIR.
agent_id is ultimately caller-controlled via target_agent, but this helper interpolates it straight into a filesystem path. A value like ../../secrets will read arbitrary files into the system prompt and can exfiltrate them to Anthropic.
🔒 Suggested hardening
```diff
 def _load_agent_prompt(agent_id: str) -> str:
     """Load the full agent prompt from its markdown definition file."""
-    agent_file = AGENTS_DIR / f"{agent_id}.md"
+    agent_file = (AGENTS_DIR / f"{agent_id}.md").resolve()
+    if AGENTS_DIR.resolve() not in agent_file.parents:
+        raise ValueError("Invalid agent id")
     if agent_file.exists():
         return agent_file.read_text(encoding="utf-8")
     return ""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/backend/core/agent_engine.py` around lines 37 -
41, The _load_agent_prompt helper currently interpolates agent_id into
AGENTS_DIR allowing path traversal; fix by validating/sanitizing agent_id in
_load_agent_prompt: reject values containing path separators ("/", "\\") or any
".." segments and/or normalize and compute the resolved path (AGENTS_DIR /
f"{agent_id}.md").resolve() and ensure the resolved path startswith
AGENTS_DIR.resolve(); if it does not, raise an exception; keep reading the file
only after this check and handle missing files with a clear error.
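The resolve-then-check-containment pattern in the hardening diff generalizes to any filename built from caller input. A self-contained sketch against a throwaway directory (the `safe_child` name is illustrative, not part of the codebase):

```python
import tempfile
from pathlib import Path

def safe_child(base: Path, name: str, suffix: str = ".md") -> Path:
    """Resolve base/name+suffix and reject any result that escapes base."""
    base = base.resolve()
    candidate = (base / f"{name}{suffix}").resolve()
    if base not in candidate.parents:
        raise ValueError("path escapes base directory")
    return candidate

# Demo: a temp dir stands in for AGENTS_DIR.
agents_dir = Path(tempfile.mkdtemp())
ok = safe_child(agents_dir, "legal-chief")      # stays inside the base
try:
    safe_child(agents_dir, "../../etc/passwd")  # traversal attempt
    escaped = True
except ValueError:
    escaped = False
```

Resolving before comparing is the important step: symlinks and `..` segments are collapsed first, so the containment check cannot be tricked by path syntax alone.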
```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=CORS_ORIGINS,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Serve clips as static files
if CLIPS_DIR.exists():
    app.mount("/clips", StaticFiles(directory=str(CLIPS_DIR)), name="clips")
```
Protect the API and clip assets with real auth/authz.
CORSMiddleware is not access control, and app.mount("/clips", ...) publishes generated artifacts by path alone. The route handlers below are also unguarded, so anyone who can reach the service can read case material and mutate server state; even if you later add FastAPI dependencies, this static mount will still stay public.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/backend/main.py` around lines 35 - 45, The code
currently exposes CLIPS_DIR via app.mount and relies on CORSMiddleware which is
not access control; remove the public StaticFiles mount and replace it with an
authenticated file-serving endpoint that enforces authorization checks before
reading from CLIPS_DIR. Specifically: stop using app.mount("/clips",
StaticFiles(...)) and instead implement a route (e.g., GET /clips/{path:path})
that uses your authentication dependency (or middleware) to validate the caller
and perform authorization (role/ownership) checks, then securely open and stream
files only from CLIPS_DIR (prevent path traversal). Also ensure all existing
route handlers use the same auth dependency or router with enforced dependencies
so no handlers remain unguarded.
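Whatever framework wiring is chosen, the guard itself can be a plain function that enforces authentication before path containment, and containment before any file I/O. A framework-free sketch (the token set and the `authorize_clip` name are assumptions for illustration, not the app's actual auth scheme):

```python
import tempfile
from pathlib import Path

def authorize_clip(clips_dir: Path, requested: str,
                   token: str, valid_tokens: set) -> Path:
    """Auth check first, then path containment; only then may the caller open the file."""
    if token not in valid_tokens:
        raise PermissionError("invalid or missing token")
    clips_dir = clips_dir.resolve()
    target = (clips_dir / requested).resolve()
    if clips_dir not in target.parents:
        raise ValueError("path escapes clips directory")
    return target

# Demo: a temp dir stands in for CLIPS_DIR.
clips = Path(tempfile.mkdtemp())
tokens = {"session-token"}
served = authorize_clip(clips, "page1_img0.png", "session-token", tokens)
try:
    authorize_clip(clips, "page1_img0.png", "bad-token", tokens)
    auth_bypassed = True
except PermissionError:
    auth_bypassed = False
try:
    authorize_clip(clips, "../main.py", "session-token", tokens)
    traversal_allowed = True
except ValueError:
    traversal_allowed = False
```

In a FastAPI route this would sit behind a dependency that supplies the caller's token, with the returned path streamed via a file response; the key property is that a rejected token never reaches the path logic, and a rejected path never reaches the disk.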
````python
async def create_agent(req: AgentCreationRequest):
    """Create a new agent via skill-based generation."""
    agent_template = f"""# {req.name}

## COMPLETE AGENT DEFINITION

```yaml
agent:
  name: {req.name}
  id: {req.name.lower().replace(' ', '-')}
  title: {req.role}
  icon: "⚖️"
  tier: {req.tier.value}
  squad: legal-analyst
  whenToUse: |
    {req.expertise}
  customization: |
    Agente criado via skill para compor o squad legal-analyst.

persona:
  role: {req.role}
  style: {req.style}
  identity: |
    Especialista em {req.expertise}
  focus: |
    {req.expertise}

commands:
{chr(10).join(f'  - name: {cmd}{chr(10)}    description: "Comando {cmd}"' for cmd in req.commands) if req.commands else '  - name: analisar' + chr(10) + '    description: "Analise especializada"'}
```
"""
    from core.config import AGENTS_DIR
    agent_file = AGENTS_DIR / f"{req.name.lower().replace(' ', '-')}.md"
    agent_file.write_text(agent_template, encoding="utf-8")
````
Sanitize req.name before turning it into a filename.
req.name.lower().replace(' ', '-') still allows ../, slashes, and silent overwrites, so this endpoint can escape AGENTS_DIR or clobber an existing agent definition. Generate a strict slug, resolve it under the base directory, and reject duplicates.
🛡️ Suggested hardening
```diff
 async def create_agent(req: AgentCreationRequest):
     """Create a new agent via skill-based generation."""
+    agent_id = "".join(
+        ch for ch in req.name.lower().replace(" ", "-")
+        if ch.isascii() and (ch.isalnum() or ch == "-")
+    ).strip("-")
+    if not agent_id:
+        raise HTTPException(status_code=400, detail="Nome de agente invalido")
+
     agent_template = f"""# {req.name}
@@
-  id: {req.name.lower().replace(' ', '-')}
+  id: {agent_id}
@@
-    agent_file = AGENTS_DIR / f"{req.name.lower().replace(' ', '-')}.md"
+    agent_file = (AGENTS_DIR / f"{agent_id}.md").resolve()
+    if AGENTS_DIR.resolve() not in agent_file.parents:
+        raise HTTPException(status_code=400, detail="Nome de agente invalido")
+    if agent_file.exists():
+        raise HTTPException(status_code=409, detail="Agente ja existe")
     agent_file.write_text(agent_template, encoding="utf-8")
@@
-    "agent_id": req.name.lower().replace(" ", "-"),
+    "agent_id": agent_id,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/backend/main.py` around lines 245 - 279, The
create_agent handler writes a file using req.name unsanitized (agent_file =
AGENTS_DIR / f"{req.name.lower().replace(' ', '-')}.md"), which allows
path-traversal and clobbering; replace that with a strict slug generation (e.g.,
allow only [a-z0-9-], convert spaces to '-', strip other chars, limit length) to
produce slug = slugify(req.name) and use agent_path = (AGENTS_DIR /
f"{slug}.md").resolve() then verify
agent_path.is_relative_to(AGENTS_DIR.resolve()) to block escapes, check if
agent_path.exists() and reject duplicates (return error) before writing, and use
the slug (not raw req.name) for id and filename in the template to avoid silent
overwrites and directory traversal.
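The slug step on its own is small enough to sketch. A hypothetical `slugify` helper (the endpoint would still need the resolve and duplicate checks shown in the hardening diff above):

```python
import re

def slugify(name: str, max_len: int = 64) -> str:
    """Strict slug: lowercase, collapse anything outside [a-z0-9] to single hyphens, bound length."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    if not slug:
        raise ValueError("name produces an empty slug")
    return slug[:max_len].rstrip("-")
```

Because every character outside `[a-z0-9]` is collapsed, traversal sequences, slashes, and accented characters all degrade into plain hyphens, so the resulting filename can never contain a path separator.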
```text
pydantic==2.9.0
pymupdf==1.24.0
pillow==10.4.0
python-jose[cryptography]==3.3.0
```
🧩 Analysis chain
🌐 Web query:
What is the first patched version of python-jose for GHSA-6c5p-j8vq-pqhj and GHSA-cjwg-qfpm-7377?
💡 Result:
- GHSA-6c5p-j8vq-pqhj (CVE-2024-33663): first patched version is python-jose 3.4.0. [1]
- GHSA-cjwg-qfpm-7377 (CVE-2024-33664): first patched version is python-jose 3.4.0. [2]
Upgrade python-jose from 3.3.0 to 3.4.0 or later before release.
Version 3.3.0 is vulnerable to GHSA-6c5p-j8vq-pqhj and GHSA-cjwg-qfpm-7377 (CVE-2024-33663 and CVE-2024-33664), both patched in 3.4.0. Since this package handles JWT/JWE in the auth flow, this is a critical security issue.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/backend/requirements.txt` at line 7, Update the
vulnerable dependency entry "python-jose[cryptography]==3.3.0" in
requirements.txt to a non-vulnerable release (e.g.
"python-jose[cryptography]>=3.4.0" or pin "==3.4.0"), then regenerate any
dependency lock files (pip-compile / poetry lock / pipenv lock) and run the
project's test suite and auth-related integration tests (JWT/JWE paths) to
ensure no breakage in the auth flow that uses python-jose.
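Beyond pinning the requirement, a startup guard can refuse to run against a vulnerable build. This is a hedged sketch, not project code: the `assert_jose_patched` and `parse_version` helpers are names we introduce, and the minimum version comes from the advisories cited above.

```python
from importlib.metadata import PackageNotFoundError, version

MIN_JOSE = (3, 4, 0)  # first release patching CVE-2024-33663 / CVE-2024-33664

def parse_version(raw: str) -> tuple:
    """Parse 'X.Y.Z'-style version strings; good enough for this guard."""
    parts = []
    for piece in raw.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits or 0))
    return tuple(parts)

def assert_jose_patched() -> None:
    try:
        installed = version("python-jose")
    except PackageNotFoundError:
        return  # dependency not installed in this environment; nothing to check
    if parse_version(installed) < MIN_JOSE:
        raise RuntimeError(
            f"python-jose {installed} is vulnerable (GHSA-6c5p-j8vq-pqhj, "
            f"GHSA-cjwg-qfpm-7377); upgrade to >= 3.4.0"
        )
```

Calling `assert_jose_patched()` during application startup turns a silent downgrade (e.g., a stale lock file) into a hard failure before any JWT/JWE path is exercised.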
- Fix REPO_URL in deploy-hostinger.sh to point to correct GitHub repo - Add deploy-remote.sh helper script for one-command VPS deployment - Add .env and *.env.local to .gitignore to prevent secret leaks https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
Actionable comments posted: 9
🧹 Nitpick comments (2)
squads/legal-analyst/webapp/deploy-remote.sh (1)
15-18: Hardcoded VPS IP address reduces portability.
The IP 31.97.29.196 is hardcoded. Consider making this configurable via an environment variable for reusability across different deployments.
♻️ Proposed fix to make IP configurable
-VPS_IP="31.97.29.196"
-VPS_USER="root"
+VPS_IP="${VPS_IP:-31.97.29.196}"
+VPS_USER="${VPS_USER:-root}"
 SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
 DEPLOY_SCRIPT="$SCRIPT_DIR/deploy-hostinger.sh"
squads/legal-analyst/webapp/deploy-hostinger.sh (1)
261-265: Certbot email assumes the admin@ mailbox exists.
The auto-generated email admin@${DOMAIN} may not be a valid mailbox. Consider prompting for an email or documenting this assumption.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.gitignore:
- Around line 4-5: Update the .gitignore entries to cover all common env file
patterns so secrets don't get committed: replace or augment the existing ".env"
and "*.env.local" entries with broader patterns like ".env.*" and ".env.*.local"
(or simply ".env*" if you prefer) to match .env.development, .env.production,
.env.test and their .local variants; ensure the file contains both the base
".env" pattern and the wildcard ".env.*" / ".env.*.local" patterns to
comprehensively ignore all environment variants.
- Line 5: The pattern '*.env.local' in .gitignore does not match the dotfile
'.env.local'; replace or add an explicit entry '.env.local' (or add a separate
line '.env.local') so the actual hidden env file is ignored. Update the
.gitignore to include the literal '.env.local' entry instead of relying on
'*.env.local'.
In `@squads/legal-analyst/webapp/deploy-hostinger.sh`:
- Around line 68-75: The script currently reads ANTHROPIC_API_KEY with read -r
which echoes the secret; update the interactive prompt in deploy-hostinger.sh to
use read -s (silent) when prompting for ANTHROPIC_API_KEY and optionally print a
newline after input so the prompt formatting remains correct; ensure the
conditional block that checks ANTHROPIC_API_KEY still validates emptiness and
exits with the same error message if unset.
- Around line 160-168: The .env here-doc in deploy-hostinger.sh writes sensitive
ANTHROPIC_API_KEY with default permissions; restrict access by setting a
restrictive umask (e.g., umask 077) before the here-doc or by creating the file
then immediately running chmod 600 .env (or use install -m 600) so only the
owner can read/write. Update the block that writes ".env" (the here-doc that
includes ANTHROPIC_API_KEY, ANTHROPIC_MODEL, API_PORT, UI_PORT) to apply one of
these changes and ensure the permission-setting command runs before any
potential exposure.
- Around line 149-155: The current git pull command (git pull origin "$BRANCH"
|| true) in the branch-update block for APP_DIR silently ignores failures and
can leave stale code; change this to run git pull origin "$BRANCH" without the
“|| true”, capture its exit status, and if non-zero emit a clear warning message
including the branch/APP_DIR and the exit code (and optionally run git status or
show the last git error) and then decide whether to exit non-zero or
prompt/abort deployment; update the block that references APP_DIR, BRANCH and
REPO_URL to implement this check and warning instead of swallowing errors.
- Around line 82-85: The if-block checking if REPO_URL equals
"https://github.com/SEU_USUARIO/aiox-squads-FelippePestana.git" is dead code
because the default REPO_URL uses "felippepestana"; either remove this entire
conditional prompt block or update the comparison to match the actual default
(e.g., replace SEU_USUARIO with felippepestana) or make the check configurable
(prompt when REPO_URL contains a placeholder pattern). Locate the conditional
that uses REPO_URL (the if [ "$REPO_URL" =
"https://github.com/SEU_USUARIO/aiox-squads-FelippePestana.git" ]; then ... fi)
and implement one of those fixes so the prompt is reachable or eliminated.
- Around line 270-280: The UFW deny rules won't block Docker-exposed ports
because Docker manipulates iptables; update the docker-compose.yml service port
bindings for the backend and frontend to bind to localhost only (use 127.0.0.1
for the API_PORT and UI_PORT mappings) so Docker only listens on loopback and
Nginx reverse proxy can safely handle external traffic; modify the ports entries
that reference API_PORT and UI_PORT to use "127.0.0.1:..." bindings instead of
host-wide bindings.
In `@squads/legal-analyst/webapp/deploy-remote.sh`:
- Around line 26-33: The script reads ANTHROPIC_API_KEY with read -r which
echoes the API key in plaintext; change the prompt/read to hide input (use a
silent read such as read -s or disable echo around the read) so the value isn't
displayed when typed, and print a newline after the silent read so the terminal
prompt is clean; update the ANTHROPIC_API_KEY read logic in deploy-remote.sh
(the read invocation and its surrounding echo prompts) to use the masked input
method.
- Around line 47-52: The script leaks ANTHROPIC_API_KEY by embedding it in the
SSH command; update deploy-remote.sh to stop passing ANTHROPIC_API_KEY on the
remote command-line (where VPS_USER, VPS_IP and DEPLOY_SCRIPT are used) and
instead transmit the secret securely—either pass the secret via stdin/heredoc
into ssh and export it on the remote side before sourcing/executing the deploy
script, or copy a temporary file with restricted permissions (chmod 600)
containing the variable and source it on the remote host then immediately delete
it; ensure the variable is never present in the ssh command string or visible in
process listings or history.
---
Nitpick comments:
In `@squads/legal-analyst/webapp/deploy-hostinger.sh`:
- Around line 261-265: The certbot call currently hardcodes --email
"admin@${DOMAIN}" which may be invalid; modify the script to accept a supplied
email (e.g., an EMAIL env var or prompt input) and use that in the certbot
invocation instead of admin@${DOMAIN}; update the certbot command (the line with
certbot --nginx -d "$DOMAIN" --email ...) to reference the EMAIL variable
(falling back to prompting or exiting with a clear message if not set) and
adjust the failure echo messages to mention the expected email variable rather
than assuming admin@${DOMAIN}.
In `@squads/legal-analyst/webapp/deploy-remote.sh`:
- Around line 15-18: The VPS_IP variable is hardcoded; change it to read from an
environment variable with a sensible default so deployments can override it
without editing the script — update the VPS_IP assignment (the VPS_IP variable
near DEPLOY_SCRIPT and SCRIPT_DIR) to use an env fallback pattern (use the
existing variable name VPS_IP) and document or expect callers to export VPS_IP
before invoking the script or pass it in the environment when running
deploy-remote.sh.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 61165fa7-976a-4bd8-acf5-89083df7d128
📒 Files selected for processing (3)
.gitignore
squads/legal-analyst/webapp/deploy-hostinger.sh
squads/legal-analyst/webapp/deploy-remote.sh
| .env | ||
| *.env.local |
🛠️ Refactor suggestion | 🟠 Major
Cover all common environment file patterns.
Both FastAPI and React/Vite support multiple environment file conventions (.env.development, .env.production, .env.test, and their .local variants). To comprehensively prevent secret leaks across development, staging, and production workflows, add all standard patterns.
🛡️ Comprehensive environment file coverage
.env
-*.env.local
+.env.*
This pattern covers:
- .env.local
- .env.development, .env.development.local
- .env.production, .env.production.local
- .env.test, .env.test.local
- Any other .env.* variants
| __pycache__/ | ||
| *.pyc | ||
| .env | ||
| *.env.local |
Critical: The pattern *.env.local does not match .env.local.
In gitignore syntax, * does not match leading dots in filenames. The pattern *.env.local will match foo.env.local or development.env.local, but will NOT match .env.local—the most common local environment file. This defeats the security objective stated in your commit message and leaves secrets vulnerable to accidental commits.
🔒 Proposed fix to properly ignore .env.local
 .env
-*.env.local
+.env.local
+.env.*.local
| if [ -z "$ANTHROPIC_API_KEY" ]; then | ||
| echo -e "${YELLOW}[?] Informe sua ANTHROPIC_API_KEY:${NC}" | ||
| read -r ANTHROPIC_API_KEY | ||
| if [ -z "$ANTHROPIC_API_KEY" ]; then | ||
| echo -e "${RED}[x] API key obrigatoria. Obtenha em https://console.anthropic.com/settings/keys${NC}" | ||
| exit 1 | ||
| fi | ||
| fi |
Sensitive input should be masked.
The read command echoes the API key to the terminal in plaintext. Use -s flag to suppress echo for sensitive credentials.
🔒 Proposed fix to mask API key input
if [ -z "$ANTHROPIC_API_KEY" ]; then
echo -e "${YELLOW}[?] Informe sua ANTHROPIC_API_KEY:${NC}"
- read -r ANTHROPIC_API_KEY
+ read -rs ANTHROPIC_API_KEY
+ echo "" # Add newline after hidden input
if [ -z "$ANTHROPIC_API_KEY" ]; then
echo -e "${RED}[x] API key obrigatoria. Obtenha em https://console.anthropic.com/settings/keys${NC}"
exit 1
fi
fi
| if [ "$REPO_URL" = "https://github.com/SEU_USUARIO/aiox-squads-FelippePestana.git" ]; then | ||
| echo -e "${YELLOW}[?] Informe a URL do repositorio Git:${NC}" | ||
| read -r REPO_URL | ||
| fi |
Dead code: condition never matches.
The default REPO_URL (line 47) uses felippepestana, but this condition checks for SEU_USUARIO. This prompt will never trigger.
🧹 Proposed fix: remove dead code or update condition
-if [ "$REPO_URL" = "https://github.com/SEU_USUARIO/aiox-squads-FelippePestana.git" ]; then
- echo -e "${YELLOW}[?] Informe a URL do repositorio Git:${NC}"
- read -r REPO_URL
-fi
| if [ -d "$APP_DIR" ]; then | ||
| echo -e "${YELLOW} Diretorio $APP_DIR ja existe. Atualizando...${NC}" | ||
| cd "$APP_DIR" | ||
| git pull origin "$BRANCH" || true | ||
| else | ||
| git clone --branch "$BRANCH" "$REPO_URL" "$APP_DIR" | ||
| fi |
Silent failure on git pull could deploy stale code.
The || true suppresses all errors, including network failures or merge conflicts. Consider at least warning the user if the pull fails.
⚠️ Proposed fix to warn on pull failure
if [ -d "$APP_DIR" ]; then
echo -e "${YELLOW} Diretorio $APP_DIR ja existe. Atualizando...${NC}"
cd "$APP_DIR"
- git pull origin "$BRANCH" || true
+ if ! git pull origin "$BRANCH"; then
+ echo -e "${YELLOW} [!] git pull falhou. Continuando com codigo existente...${NC}"
+ fi
else
git clone --branch "$BRANCH" "$REPO_URL" "$APP_DIR"
fi
| # Criar .env | ||
| cat > .env <<ENVFILE | ||
| ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY} | ||
| ANTHROPIC_MODEL=claude-sonnet-4-20250514 | ||
| API_PORT=${API_PORT} | ||
| UI_PORT=${UI_PORT} | ||
| ENVFILE | ||
|
|
||
| echo -e "${GREEN} .env configurado${NC}" |
Restrict .env file permissions to prevent credential exposure.
The .env file contains ANTHROPIC_API_KEY but is created with default permissions (typically world-readable). Any user on the system could read the API key.
🔒 Proposed fix to restrict file permissions
# Criar .env
+umask 077
cat > .env <<ENVFILE
ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
ANTHROPIC_MODEL=claude-sonnet-4-20250514
API_PORT=${API_PORT}
UI_PORT=${UI_PORT}
ENVFILE
+chmod 600 .env
echo -e "${GREEN} .env configurado${NC}"
| # --------------------------------------------------------------------------- | ||
| # ETAPA 9: Firewall (UFW) | ||
| # --------------------------------------------------------------------------- | ||
| echo -e "${GREEN}[9/9] Configurando firewall...${NC}" | ||
| ufw allow OpenSSH | ||
| ufw allow 'Nginx Full' | ||
| # Nao expor portas internas | ||
| ufw deny "$API_PORT" | ||
| ufw deny "$UI_PORT" | ||
| echo "y" | ufw enable || true | ||
| echo -e "${GREEN} Firewall configurado (SSH + HTTP/HTTPS abertos)${NC}" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check current docker-compose.yml port bindings
cat squads/legal-analyst/webapp/docker-compose.yml | grep -A2 "ports:"
Repository: SynkraAI/aiox-squads
Length of output: 184

🏁 Script executed:
# View full docker-compose.yml to understand complete setup
cat squads/legal-analyst/webapp/docker-compose.yml
Repository: SynkraAI/aiox-squads
Length of output: 1217

🏁 Script executed:
# Check if Nginx is configured in the deployment script
grep -n "nginx\|Nginx\|NGINX" squads/legal-analyst/webapp/deploy-hostinger.sh
Repository: SynkraAI/aiox-squads
Length of output: 1043

🏁 Script executed:
# Check what comes before the firewall section to understand the full architecture
sed -n '1,270p' squads/legal-analyst/webapp/deploy-hostinger.sh | tail -50
Repository: SynkraAI/aiox-squads
Length of output: 1678

🏁 Script executed:
# Search for references to API_PORT and UI_PORT outside docker-compose
rg "API_PORT|UI_PORT" squads/legal-analyst/ --type sh --type js --type ts --type py -B2 -A2
Repository: SynkraAI/aiox-squads
Length of output: 4597

🏁 Script executed:
# Check if any environment config or application code directly references these ports
grep -r "8000\|3000" squads/legal-analyst/webapp/ --include="*.js" --include="*.ts" --include="*.py" 2>/dev/null | head -20
Repository: SynkraAI/aiox-squads
Length of output: 441
Bind Docker ports to localhost-only in docker-compose.yml.
Docker manipulates iptables directly, bypassing UFW. The ufw deny commands on lines 277-278 will not prevent external access to ports 8000 and 3000 if Docker exposes them. Since the Nginx reverse proxy (already configured) proxies only from localhost, restrict Docker port bindings to 127.0.0.1:
Recommended fix in docker-compose.yml
# In docker-compose.yml, change port bindings to localhost-only:
ports:
- "127.0.0.1:${API_PORT:-8000}:8000" # backend
- "127.0.0.1:${UI_PORT:-3000}:80"   # frontend
| if [ -z "${ANTHROPIC_API_KEY:-}" ]; then | ||
| echo -e "${YELLOW}[?] Informe sua ANTHROPIC_API_KEY:${NC}" | ||
| read -r ANTHROPIC_API_KEY | ||
| if [ -z "$ANTHROPIC_API_KEY" ]; then | ||
| echo -e "${RED}[x] API key obrigatoria.${NC}" | ||
| exit 1 | ||
| fi | ||
| fi |
Sensitive input should be masked.
Same issue as deploy-hostinger.sh - the API key is echoed in plaintext during input.
🔒 Proposed fix
if [ -z "${ANTHROPIC_API_KEY:-}" ]; then
echo -e "${YELLOW}[?] Informe sua ANTHROPIC_API_KEY:${NC}"
- read -r ANTHROPIC_API_KEY
+ read -rs ANTHROPIC_API_KEY
+ echo ""
if [ -z "$ANTHROPIC_API_KEY" ]; then
echo -e "${RED}[x] API key obrigatoria.${NC}"
exit 1
fi
fi
| # Executar deploy via SSH | ||
| echo -e "${GREEN}[1/1] Executando deploy na VPS...${NC}" | ||
| ssh "${VPS_USER}@${VPS_IP}" \ | ||
| "ANTHROPIC_API_KEY='${ANTHROPIC_API_KEY}' \ | ||
| REPO_URL='https://github.com/felippepestana/aiox-squads-FelippePestana.git' \ | ||
| bash -s" < "$DEPLOY_SCRIPT" |
API key exposed in SSH command line.
Passing ANTHROPIC_API_KEY directly in the SSH command string can expose it in:
- Process listings (ps aux)
- Shell history on both local and remote systems
- SSH logs
Consider passing credentials via stdin or a temporary file with restricted permissions instead of command-line arguments.
🔒 Proposed fix using heredoc for safer credential passing
# Executar deploy via SSH
echo -e "${GREEN}[1/1] Executando deploy na VPS...${NC}"
-ssh "${VPS_USER}@${VPS_IP}" \
- "ANTHROPIC_API_KEY='${ANTHROPIC_API_KEY}' \
- REPO_URL='https://github.com/felippepestana/aiox-squads-FelippePestana.git' \
- bash -s" < "$DEPLOY_SCRIPT"
+ssh "${VPS_USER}@${VPS_IP}" 'bash -s' <<EOF
+export ANTHROPIC_API_KEY='${ANTHROPIC_API_KEY}'
+export REPO_URL='https://github.com/felippepestana/aiox-squads-FelippePestana.git'
+$(cat "$DEPLOY_SCRIPT")
+EOF
Configures hostinger-api-mcp for VPS management via Claude Code. https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
Prevents Hostinger API token from being exposed in the repository. https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
- Add stripe_service.py with checkout, subscriptions, webhooks, and access control - Add VSL landing page (VSLPage.tsx) with hero, pricing, FAQ, testimonials - Add Stripe API routes to FastAPI backend (checkout, plans, webhook, validation) - Update App.tsx to show VSL by default, /app path for authenticated users - Add stripe dependency to requirements.txt - Update docker-compose.yml and deploy-hostinger.sh with Stripe env vars - Update .env.example with Stripe configuration template https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
Actionable comments posted: 9
♻️ Duplicate comments (5)
squads/legal-analyst/webapp/deploy-hostinger.sh (5)
82-85: ⚠️ Potential issue | 🟡 Minor
Remove unreachable placeholder repo prompt.
Lines 82-85 compare against SEU_USUARIO, but the default on line 47 is felippepestana; this branch never executes.
🧹 Suggested fix
-if [ "$REPO_URL" = "https://github.com/SEU_USUARIO/aiox-squads-FelippePestana.git" ]; then
-  echo -e "${YELLOW}[?] Informe a URL do repositorio Git:${NC}"
-  read -r REPO_URL
-fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/deploy-hostinger.sh` around lines 82-85, the conditional checking REPO_URL against the placeholder "https://github.com/SEU_USUARIO/aiox-squads-FelippePestana.git" is unreachable because the script sets a different default ("felippepestana"); either delete the if-block, change the comparison to match the actual default or a valid placeholder pattern, or make the check configurable so the prompt (read -r REPO_URL) is reachable or eliminated.
68-70: ⚠️ Potential issue | 🟡 Minor
Mask secret input when reading API key.
On line 70, read -r echoes the key in plaintext. Use silent input for ANTHROPIC_API_KEY.
🔒 Suggested fix
 if [ -z "$ANTHROPIC_API_KEY" ]; then
   echo -e "${YELLOW}[?] Informe sua ANTHROPIC_API_KEY:${NC}"
-  read -r ANTHROPIC_API_KEY
+  read -rs ANTHROPIC_API_KEY
+  echo ""
   if [ -z "$ANTHROPIC_API_KEY" ]; then
     echo -e "${RED}[x] API key obrigatoria. Obtenha em https://console.anthropic.com/settings/keys${NC}"
     exit 1
   fi
 fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/deploy-hostinger.sh` around lines 68-70, the script currently reads ANTHROPIC_API_KEY with `read -r`, which echoes the secret; read it in silent mode instead (`read -rs` or equivalent) so the key is not printed, keep the prompt, and emit a trailing newline after the silent read.
290-293: ⚠️ Potential issue | 🟠 Major

UFW denies can be bypassed by Docker-published ports.

Lines 291-292 alone don't guarantee isolation when Docker publishes on 0.0.0.0. Ensure compose mappings for API/UI are bound to loopback (`127.0.0.1:...`) so traffic must pass through Nginx.

```shell
#!/bin/bash
set -euo pipefail
# Verify whether compose ports are bound to all interfaces or localhost-only
fd -a "docker-compose.yml$" | while read -r file; do
    echo "=== $file ==="
    rg -n -A4 -B2 'ports:|API_PORT|UI_PORT|8000|3000|127\.0\.0\.1' "$file" || true
done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/deploy-hostinger.sh` around lines 290 - 293, The UFW deny rules for "$API_PORT" and "$UI_PORT" are insufficient because Docker can publish ports on 0.0.0.0 and bypass UFW; update the deployment to ensure docker-compose port mappings for the services that use API_PORT and UI_PORT are bound to localhost (127.0.0.1:host:container) so traffic must go through Nginx, and add a verification step (or CI check) that scans docker-compose files for unsecured "ports:" entries exposing 0.0.0.0; specifically inspect the compose service definitions that reference API_PORT and UI_PORT and change their "ports" mappings to use 127.0.0.1, and optionally document this requirement near the ufw commands in deploy-hostinger.sh.
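As a sketch of the loopback binding this comment asks for — assuming the compose services are named `api` and `ui` and listen on container ports 8000/3000, which may differ in the actual file:

```yaml
services:
  api:
    ports:
      - "127.0.0.1:${API_PORT:-8000}:8000"  # published on loopback only; reachable via Nginx
  ui:
    ports:
      - "127.0.0.1:${UI_PORT:-3000}:3000"
```

With this mapping, Docker's iptables rules no longer expose the ports on external interfaces, so the UFW denies become redundant rather than load-bearing.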
160-173: ⚠️ Potential issue | 🟠 Major

Harden `.env` permissions before writing secrets.

Lines 161-171 write multiple credentials (`ANTHROPIC_API_KEY`, Stripe secrets) with default file perms. Restrict access to owner only.

🔐 Suggested fix

```diff
 # Criar .env
+umask 077
 cat > .env <<ENVFILE
 ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
 ANTHROPIC_MODEL=claude-sonnet-4-20250514
 API_PORT=${API_PORT}
 UI_PORT=${UI_PORT}
 STRIPE_SECRET_KEY=${STRIPE_SECRET_KEY:-}
 STRIPE_PUBLISHABLE_KEY=${STRIPE_PUBLISHABLE_KEY:-}
 STRIPE_WEBHOOK_SECRET=${STRIPE_WEBHOOK_SECRET:-}
 STRIPE_PRICE_ID=${STRIPE_PRICE_ID:-}
 APP_URL=${APP_URL:-http://${PUBLIC_IP:-localhost}}
 ENVFILE
+chmod 600 .env
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/deploy-hostinger.sh` around lines 160 - 173, The .env here-doc that writes secrets (the cat > .env <<ENVFILE ... ENVFILE block) creates the file with default permissions; change it to create the file with owner-only access by setting a restrictive umask (e.g., umask 077) before the here-doc and restoring afterwards, or write the file and immediately run chmod 600 on .env; update the deploy-hostinger.sh .env creation block to use one of these approaches so secrets are readable only by the owner.
149-153: ⚠️ Potential issue | 🟡 Minor

Don't silently ignore `git pull` failures.

Line 152 currently hides fetch/merge/network errors and can deploy stale code without warning.

⚠️ Suggested fix

```diff
 if [ -d "$APP_DIR" ]; then
     echo -e "${YELLOW} Diretorio $APP_DIR ja existe. Atualizando...${NC}"
     cd "$APP_DIR"
-    git pull origin "$BRANCH" || true
+    if ! git pull origin "$BRANCH"; then
+        echo -e "${YELLOW} [!] git pull falhou em $APP_DIR (branch: $BRANCH). Abortando deploy.${NC}"
+        exit 1
+    fi
 else
     git clone --branch "$BRANCH" "$REPO_URL" "$APP_DIR"
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/deploy-hostinger.sh` around lines 149 - 153, The script currently ignores git pull failures by using "git pull origin \"$BRANCH\" || true", which can leave stale code deployed; replace this with a guarded pull that checks the command exit status (or uses set -e) and on non-zero prints a clear error including $APP_DIR and $BRANCH via the existing logging colors and exits with non-zero status; update the block around APP_DIR (the cd "$APP_DIR" and git pull origin "$BRANCH" invocation) to capture the pull's exit code (e.g., run git pull ...; rc=$?; if [ $rc -ne 0 ]; then echo -e "${RED}Git pull failed in $APP_DIR for branch $BRANCH (exit $rc)${NC}"; exit $rc; fi) so failures are visible and stop the deployment.
🧹 Nitpick comments (8)
squads/legal-analyst/webapp/backend/main.py (3)
128-128: Unused variable `pages` from document processing.

The return value is unpacked but `pages` is never used.

♻️ Suggested fix

```diff
-    metadata, pages = document_store.add_document(filepath)
+    metadata, _pages = document_store.add_document(filepath)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/backend/main.py` at line 128, The variable pages returned from document_store.add_document(filepath) is unpacked but never used; update the call in main.py to either discard the unused value by assigning it to _ (e.g., metadata, _ = document_store.add_document(...)) or only capture metadata by calling a single-variable assignment (e.g., metadata = document_store.add_document(...)) depending on add_document's return signature, ensuring you reference the document_store.add_document(filepath) call and the metadata variable in the fix.
401-403: Development server binds to all interfaces.

Binding to `0.0.0.0` is appropriate for containerized deployments but exposes the service on all network interfaces during local development. Consider using `127.0.0.1` for development or making the host configurable:

```diff
 if __name__ == "__main__":
     import uvicorn
-    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
+    import os
+    uvicorn.run("main:app", host=os.getenv("HOST", "127.0.0.1"), port=8000, reload=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/backend/main.py` around lines 401 - 403, The development entrypoint in the __main__ block calls uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True) which binds to all interfaces; change this to bind to localhost or make it configurable. Update the uvicorn.run call in the __main__ section (the block containing uvicorn.run and the "main:app" string) to use host="127.0.0.1" for local development or read the host from an environment variable/config (e.g., UVICORN_HOST) and fall back to "127.0.0.1" so containerized deployments can override it without exposing all interfaces by default.
348-356: Improve exception handling with chaining and specific types.

Catching bare `Exception` and re-raising without `from` loses the original traceback. Consider catching `stripe.error.StripeError` specifically.

♻️ Suggested improvement

```diff
 @app.post("/api/stripe/checkout")
 async def stripe_checkout(req: CheckoutRequest):
     try:
         result = create_checkout_session(req)
         return result.model_dump()
     except ValueError as e:
-        raise HTTPException(status_code=400, detail=str(e))
+        raise HTTPException(status_code=400, detail=str(e)) from e
-    except Exception as e:
-        raise HTTPException(status_code=500, detail=f"Stripe error: {str(e)}")
+    except stripe.error.StripeError as e:
+        raise HTTPException(status_code=500, detail=f"Stripe error: {e!s}") from e
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/backend/main.py` around lines 348 - 356, The endpoint stripe_checkout currently catches a bare Exception and re-raises without chaining which loses the original traceback; update the except blocks to catch stripe.error.StripeError (or other specific Stripe exceptions) instead of Exception and re-raise HTTPException using exception chaining (raise HTTPException(...) from e) so the original traceback is preserved; locate the stripe_checkout function and the call to create_checkout_session and replace the generic except Exception as e handler with except stripe.error.StripeError as e (and keep the existing ValueError handler), and ensure all re-raised HTTPException use "from e".

squads/legal-analyst/webapp/docker-compose.yml (2)
29-33: Health check command could fail if Python isn't in PATH.

The health check relies on `python` being available and correctly configured in the container. Consider using `curl` or `wget` if available, as they're more commonly included in minimal images.

♻️ Alternative using curl

```diff
 healthcheck:
-  test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/api/health')"]
+  test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]
   interval: 30s
   timeout: 10s
   retries: 3
```

Note: Ensure `curl` is installed in the backend Dockerfile.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/docker-compose.yml` around lines 29 - 33, The healthcheck's test currently invokes "python" which may not exist in minimal images; update the healthcheck block's test command (the healthcheck: test: entry) to use a more commonly available tool such as curl or wget (e.g., call curl -f http://localhost:8000/api/health || exit 1) and ensure the backend Dockerfile installs curl/wget so the binary is present at runtime; modify the healthcheck test string accordingly in the docker-compose.yml and add installation of curl/wget in the Dockerfile used to build the service.
24-28: Volume mount paths are functional but could be clearer.

The mounts correctly resolve to the expected locations: `../agents:/app/../agents:ro` places files at `/agents` in the container, which aligns with `config.py`'s path resolution (`SQUAD_ROOT = /` in the container, so `AGENTS_DIR = /agents`). However, the syntax `/app/../agents` is unnecessarily confusing and makes the intent harder to read.

♻️ Clearer mount paths

```diff
 volumes:
   - uploads_data:/app/uploads
   - clips_data:/app/clips
   # Mount squad agents/data for live reading
-  - ../agents:/app/../agents:ro
-  - ../data:/app/../data:ro
-  - ../templates:/app/../templates:ro
-  - ../workflows:/app/../workflows:ro
-  - ../checklists:/app/../checklists:ro
+  - ../agents:/agents:ro
+  - ../data:/data:ro
+  - ../templates:/templates:ro
+  - ../workflows:/workflows:ro
+  - ../checklists:/checklists:ro
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/docker-compose.yml` around lines 24 - 28, Replace the confusing container-side paths like /app/../agents in docker-compose.yml with explicit, readable container targets that match config.py's resolution (SQUAD_ROOT and AGENTS_DIR); e.g., map host ../agents to the container path used by AGENTS_DIR (and do the same for ../data, ../templates, ../workflows, ../checklists) so mounts read like ../agents:/agents:ro, ../data:/data:ro, ../templates:/templates:ro, ../workflows:/workflows:ro, ../checklists:/checklists:ro and clearly reflect SQUAD_ROOT/AGENTS_DIR usage.

squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx (3)
41-46: Silent error swallowing hides API failures.

The empty `.catch(() => {})` discards any error information. Consider logging to console in development or setting an error state for user feedback.

♻️ Suggested improvement

```diff
   useEffect(() => {
     fetch(`${API_BASE}/stripe/plans`)
       .then((r) => r.json())
       .then(setPlans)
-      .catch(() => {});
+      .catch((err) => {
+        console.error("Failed to fetch plans:", err);
+        // Fallback plans are already rendered when plans is empty
+      });
   }, []);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx` around lines 41 - 46, The current useEffect that calls fetch(`${API_BASE}/stripe/plans`) swallows errors with .catch(() => {}), losing failure visibility; update the fetch error handling in the useEffect (the promise chain that ends with .then(setPlans)) to at minimum log the caught error (e.g., console.error) and/or set an error state (e.g., setPlansError) so the UI can show feedback; ensure you reference the existing useEffect, fetch call, API_BASE constant, and setPlans setter when adding the logging/state update and remove the silent empty catch.
52-69: Add email format validation before checkout.

The email is sent directly to the checkout endpoint without format validation. This could result in poor UX if users submit invalid emails.

♻️ Suggested improvement

```diff
+  const isValidEmail = (email: string) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
+
   const handleCheckout = async (planKey: string) => {
+    if (!isValidEmail(email)) {
+      alert("Por favor, insira um email valido.");
+      return;
+    }
     setLoading(planKey);
     try {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx` around lines 52 - 69, In handleCheckout, validate the email format before calling the checkout API: check the email variable (used in the POST body) with a concise regex or email validator at the top of handleCheckout, and if invalid prevent the fetch, call setLoading(null) and show a user-facing error (e.g., alert or set an error state) instead of proceeding; keep the existing try/catch/finally flow and ensure references to handleCheckout, setLoading and email are used to locate and update the logic.
307-314: Avoid `dangerouslySetInnerHTML` even for static HTML entities.

Although the icon strings are static and contain only HTML entities, `dangerouslySetInnerHTML` bypasses React's XSS protection. Consider using Unicode characters or emoji directly, or a library like `html-entities` to decode at build time.

♻️ Suggested fix using Unicode directly

```diff
         {[
-          { icon: "&#9878;", name: "Barbosa", role: "Classificacao TPU" },
-          { icon: "&#9879;", name: "Fux", role: "Admissibilidade CPC" },
+          { icon: "⚖", name: "Barbosa", role: "Classificacao TPU" },
+          { icon: "⚗", name: "Fux", role: "Admissibilidade CPC" },
           // ... update all icons to use Unicode characters
         ].map((agent) => (
           <div
             key={agent.name}
             className="p-4 rounded-xl bg-white/[0.03] border border-white/5 text-center hover:border-amber-500/20 transition"
           >
-            <div
-              className="text-2xl mb-2"
-              dangerouslySetInnerHTML={{ __html: agent.icon }}
-            />
+            <div className="text-2xl mb-2">{agent.icon}</div>
             <div className="font-semibold text-sm">{agent.name}</div>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx` around lines 307 - 314, The JSX currently uses dangerouslySetInnerHTML to render agent.icon inside the agents mapping in VSLPage.tsx which bypasses React XSS protections; instead remove dangerouslySetInnerHTML and render a plain text node by decoding HTML entities at build-time or runtime (e.g., decode agent.icon using a safe decoder like html-entities' decode or, better, store the icon as a Unicode/emoji string in the agents data). Update the agents rendering block that references agent.icon to use the decoded string (or direct Unicode) inside the div (keep className="text-2xl mb-2") so no innerHTML is used; ensure any import of a decoder (e.g., decode from 'html-entities') is added at the top of VSLPage.tsx if you choose runtime decoding.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@squads/legal-analyst/webapp/backend/core/stripe_service.py`:
- Around line 203-207: The webhook handler for event_type
"checkout.session.completed" calls _grant_access which generates a token but
never delivers it to the user; update the flow so the token is retrievable by
the client after checkout—either append the generated token to the checkout
success redirect URL, or persist the token server-side keyed by customer email
(data.get("customer_email")) and implement a lookup endpoint the frontend
(App.tsx) can call on /app to fetch and store the token, ensuring _grant_access
returns the token (or its ID) so it can be included in the response/redirect.
- Around line 100-101: The in-memory _access_tokens: dict[str, AccessToken]
loses all tokens on restart; change it to a persistent store (e.g., Redis or a
file-backed DB) by replacing the in-memory map with a persistence-backed
accessor layer: create functions get_access_token(key), set_access_token(key,
AccessToken), delete_access_token(key) that serialize/deserialize the
AccessToken objects and use a Redis client (or file/DB) under the hood, update
all places that reference _access_tokens to call these accessors (retain the
AccessToken type and serialization format), and add retry/expiry handling and
basic error logging so token reads/writes degrade gracefully.
- Around line 194-198: The code currently falls back to
stripe.Event.construct_from when STRIPE_WEBHOOK_SECRET is empty, bypassing
signature verification; change this to fail closed: in the webhook handler check
STRIPE_WEBHOOK_SECRET and if it's missing or empty raise an explicit error/abort
the request (e.g., throw an exception or return a 4xx/5xx response) instead of
calling stripe.Event.construct_from, and ensure only
stripe.Webhook.construct_event(payload, sig_header, STRIPE_WEBHOOK_SECRET) is
used for verified parsing; reference STRIPE_WEBHOOK_SECRET,
stripe.Webhook.construct_event, stripe.Event.construct_from, payload and
sig_header when updating the logic.
In `@squads/legal-analyst/webapp/backend/main.py`:
- Around line 379-386: The endpoints stripe_subscription and stripe_cancel
currently accept an email path param which exposes PII and permits
unauthenticated enumeration/cancellations; change them to require authenticated
identity (e.g. a dependency injection like get_current_user or session) and stop
taking email in the URL: fetch the caller's email from the authenticated
principal or session, verify ownership (or admin scope) before returning
subscription data or performing cancellation, and return only minimal,
non-sensitive fields. Update stripe_subscription(email: str) ->
stripe_subscription(current_user) and stripe_cancel(email: str) ->
stripe_cancel(current_user) (or equivalent auth dependency), add an
authorization check in cancel_subscription to ensure current_user.email matches
the subscription owner (or user has admin role), and remove or redact any PII
from responses.
In `@squads/legal-analyst/webapp/deploy-hostinger.sh`:
- Around line 160-171: The APP_URL entry in the .env here-doc is using
${PUBLIC_IP:-localhost} before PUBLIC_IP is assigned, causing APP_URL to default
to localhost; move or recompute APP_URL after PUBLIC_IP is set (or change the
here-doc generation to insert an APP_URL value derived from the same logic that
assigns PUBLIC_IP). Specifically, update the .env creation (the here-doc that
writes APP_URL) so it runs after the script sets PUBLIC_IP (or replace the
inline expansion with a later assignment like APP_URL="http://${PUBLIC_IP}" once
PUBLIC_IP is available) to ensure external callbacks/redirects use the actual
public IP.
In `@squads/legal-analyst/webapp/frontend/src/App.tsx`:
- Around line 132-137: The page-check currently uses a truthy test (if (page))
which will skip valid page 0; update the conditional in the handler that looks
up pdf.documents and calls pdf.setActiveDoc/pdf.setActivePage (the anonymous
callback taking (docId: string, page?: number)) to use an explicit undefined
check (page !== undefined) before calling pdf.setActivePage so boundary page 0
is preserved; keep the surrounding calls to pdf.setActiveDoc and setActiveView
intact.
- Around line 19-23: The current useState initializer for showVSL (and related
logic around window.location.search and pathname) relies solely on client-side
URL checks and must be replaced with a server-validated decision: remove the
URL-only gating and instead call a backend access validation endpoint (e.g., an
auth/subscription check) from a useEffect on mount (use the existing
showVSL/setShowVSL symbols), await the response, and only setShowVSL(false) when
the backend confirms access; still respect the explicit ?success=true query or
other UI flags but only as secondary hints after the server returns success, and
ensure errors/defaults keep showVSL true to avoid accidental exposure.
In `@squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx`:
- Around line 508-517: The fallback static plan buttons currently call
scrollToPricing which prevents checkout; update the button's onClick in the
fallback rendering (where plan is used and the class conditional on
plan.popular) to call handleCheckout with the appropriate plan identifier/object
instead of scrollToPricing so clicking a fallback plan initiates the
subscription flow; ensure you pass the same argument shape that the existing
handleCheckout handler expects (e.g., plan or plan.id).
- Around line 54-63: The fetch response from the checkout endpoint is parsed
without checking HTTP status; update the checkout flow in VSLPage.tsx (the try
block using fetch, variables res and data) to first check res.ok and handle
non-2xx responses: if !res.ok, parse the JSON/error body (or read text) to
extract an error message and surface it (throw or call setError/log) including
res.status and the message, otherwise continue to parse data and redirect using
data.checkout_url; ensure you still reference planKey and email when building
the request and fail fast on bad responses to avoid silently processing error
payloads.
---
Duplicate comments:
In `@squads/legal-analyst/webapp/deploy-hostinger.sh`:
- Around line 82-85: The conditional checking REPO_URL against the placeholder
"https://github.com/SEU_USUARIO/aiox-squads-FelippePestana.git" is unreachable
because the script sets a different default ("felippepestana"); remove or update
this branch by either deleting the if-block (lines testing that placeholder) or
change the comparison to match the actual default/expected placeholder value;
specifically modify the conditional that references REPO_URL and the prompt/
read -r REPO_URL flow so that it executes only for a valid placeholder or is
removed entirely.
- Around line 68-70: The script currently reads the ANTHROPIC_API_KEY with `read
-r` which echoes the secret; change the input to silent mode (use `read -s` /
`read -rs` or equivalent) when reading ANTHROPIC_API_KEY so the key is not
printed, and ensure you still provide a prompt and a trailing newline after the
silent read; update the read call that sets ANTHROPIC_API_KEY and adjust any
prompt/echo handling accordingly.
- Around line 290-293: The UFW deny rules for "$API_PORT" and "$UI_PORT" are
insufficient because Docker can publish ports on 0.0.0.0 and bypass UFW; update
the deployment to ensure docker-compose port mappings for the services that use
API_PORT and UI_PORT are bound to localhost (127.0.0.1:host:container) so
traffic must go through Nginx, and add a verification step (or CI check) that
scans docker-compose files for unsecured "ports:" entries exposing 0.0.0.0;
specifically inspect the compose service definitions that reference API_PORT and
UI_PORT and change their "ports" mappings to use 127.0.0.1, and optionally
document this requirement near the ufw commands in deploy-hostinger.sh.
- Around line 160-173: The .env here-doc that writes secrets (the cat > .env
<<ENVFILE ... ENVFILE block) creates the file with default permissions; change
it to create the file with owner-only access by setting a restrictive umask
(e.g., umask 077) before the here-doc and restoring afterwards, or write the
file and immediately run chmod 600 on .env; update the deploy-hostinger.sh .env
creation block to use one of these approaches so secrets are readable only by
the owner.
- Around line 149-153: The script currently ignores git pull failures by using
"git pull origin \"$BRANCH\" || true", which can leave stale code deployed;
replace this with a guarded pull that checks the command exit status (or uses
set -e) and on non-zero prints a clear error including $APP_DIR and $BRANCH via
the existing logging colors and exits with non-zero status; update the block
around APP_DIR (the cd "$APP_DIR" and git pull origin "$BRANCH" invocation) to
capture the pull's exit code (e.g., run git pull ...; rc=$?; if [ $rc -ne 0 ];
then echo -e "${RED}Git pull failed in $APP_DIR for branch $BRANCH (exit
$rc)${NC}"; exit $rc; fi) so failures are visible and stop the deployment.
---
Nitpick comments:
In `@squads/legal-analyst/webapp/backend/main.py`:
- Line 128: The variable pages returned from
document_store.add_document(filepath) is unpacked but never used; update the
call in main.py to either discard the unused value by assigning it to _ (e.g.,
metadata, _ = document_store.add_document(...)) or only capture metadata by
calling a single-variable assignment (e.g., metadata =
document_store.add_document(...)) depending on add_document's return signature,
ensuring you reference the document_store.add_document(filepath) call and the
metadata variable in the fix.
- Around line 401-403: The development entrypoint in the __main__ block calls
uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True) which binds to
all interfaces; change this to bind to localhost or make it configurable. Update
the uvicorn.run call in the __main__ section (the block containing uvicorn.run
and the "main:app" string) to use host="127.0.0.1" for local development or read
the host from an environment variable/config (e.g., UVICORN_HOST) and fall back
to "127.0.0.1" so containerized deployments can override it without exposing all
interfaces by default.
- Around line 348-356: The endpoint stripe_checkout currently catches a bare
Exception and re-raises without chaining which loses the original traceback;
update the except blocks to catch stripe.error.StripeError (or other specific
Stripe exceptions) instead of Exception and re-raise HTTPException using
exception chaining (raise HTTPException(...) from e) so the original traceback
is preserved; locate the stripe_checkout function and the call to
create_checkout_session and replace the generic except Exception as e handler
with except stripe.error.StripeError as e (and keep the existing ValueError
handler), and ensure all re-raised HTTPException use "from e".
In `@squads/legal-analyst/webapp/docker-compose.yml`:
- Around line 29-33: The healthcheck's test currently invokes "python" which may
not exist in minimal images; update the healthcheck block's test command (the
healthcheck: test: entry) to use a more commonly available tool such as curl or
wget (e.g., call curl -f http://localhost:8000/api/health || exit 1) and ensure
the backend Dockerfile installs curl/wget so the binary is present at runtime;
modify the healthcheck test string accordingly in the docker-compose.yml and add
installation of curl/wget in the Dockerfile used to build the service.
- Around line 24-28: Replace the confusing container-side paths like
/app/../agents in docker-compose.yml with explicit, readable container targets
that match config.py's resolution (SQUAD_ROOT and AGENTS_DIR); e.g., map host
../agents to the container path used by AGENTS_DIR (and do the same for ../data,
../templates, ../workflows, ../checklists) so mounts read like
../agents:/agents:ro, ../data:/data:ro, ../templates:/templates:ro,
../workflows:/workflows:ro, ../checklists:/checklists:ro and clearly reflect
SQUAD_ROOT/AGENTS_DIR usage.
In `@squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx`:
- Around line 41-46: The current useEffect that calls
fetch(`${API_BASE}/stripe/plans`) swallows errors with .catch(() => {}), losing
failure visibility; update the fetch error handling in the useEffect (the
promise chain that ends with .then(setPlans)) to at minimum log the caught error
(e.g., console.error) and/or set an error state (e.g., setPlansError) so the UI
can show feedback; ensure you reference the existing useEffect, fetch call,
API_BASE constant, and setPlans setter when adding the logging/state update and
remove the silent empty catch.
- Around line 52-69: In handleCheckout, validate the email format before calling
the checkout API: check the email variable (used in the POST body) with a
concise regex or email validator at the top of handleCheckout, and if invalid
prevent the fetch, call setLoading(null) and show a user-facing error (e.g.,
alert or set an error state) instead of proceeding; keep the existing
try/catch/finally flow and ensure references to handleCheckout, setLoading and
email are used to locate and update the logic.
- Around line 307-314: The JSX currently uses dangerouslySetInnerHTML to render
agent.icon inside the agents mapping in VSLPage.tsx which bypasses React XSS
protections; instead remove dangerouslySetInnerHTML and render a plain text node
by decoding HTML entities at build-time or runtime (e.g., decode agent.icon
using a safe decoder like html-entities' decode or, better, store the icon as a
Unicode/emoji string in the agents data). Update the agents rendering block that
references agent.icon to use the decoded string (or direct Unicode) inside the
div (keep className="text-2xl mb-2") so no innerHTML is used; ensure any import
of a decoder (e.g., decode from 'html-entities') is added at the top of
VSLPage.tsx if you choose runtime decoding.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 632d256e-1e23-4004-b250-d387e8e113f1
📒 Files selected for processing (9)
- squads/legal-analyst/webapp/.env.example
- squads/legal-analyst/webapp/backend/core/config.py
- squads/legal-analyst/webapp/backend/core/stripe_service.py
- squads/legal-analyst/webapp/backend/main.py
- squads/legal-analyst/webapp/backend/requirements.txt
- squads/legal-analyst/webapp/deploy-hostinger.sh
- squads/legal-analyst/webapp/docker-compose.yml
- squads/legal-analyst/webapp/frontend/src/App.tsx
- squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- squads/legal-analyst/webapp/backend/core/config.py
```python
# In-memory store for access tokens (use DB in production)
_access_tokens: dict[str, AccessToken] = {}
```
In-memory token store loses all access grants on restart.
The comment acknowledges this limitation, but server restarts or deployments will immediately invalidate all paid users' access tokens, causing support issues.
Consider at minimum persisting tokens to a file or using Redis for session persistence before production deployment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/backend/core/stripe_service.py` around lines 100
- 101, The in-memory _access_tokens: dict[str, AccessToken] loses all tokens on
restart; change it to a persistent store (e.g., Redis or a file-backed DB) by
replacing the in-memory map with a persistence-backed accessor layer: create
functions get_access_token(key), set_access_token(key, AccessToken),
delete_access_token(key) that serialize/deserialize the AccessToken objects and
use a Redis client (or file/DB) under the hood, update all places that reference
_access_tokens to call these accessors (retain the AccessToken type and
serialization format), and add retry/expiry handling and basic error logging so
token reads/writes degrade gracefully.
```python
if STRIPE_WEBHOOK_SECRET:
    event = stripe.Webhook.construct_event(payload, sig_header, STRIPE_WEBHOOK_SECRET)
else:
    import json
    event = stripe.Event.construct_from(json.loads(payload), stripe.api_key)
```
Webhook signature bypass in production is a security risk.
When STRIPE_WEBHOOK_SECRET is empty, the code parses the payload without signature verification. This allows attackers to forge webhook events and grant themselves access.
🛡️ Suggested fix - fail closed when secret is missing
```diff
 def process_webhook(payload: bytes, sig_header: str) -> dict[str, Any]:
     """Process Stripe webhook event."""
-    if STRIPE_WEBHOOK_SECRET:
-        event = stripe.Webhook.construct_event(payload, sig_header, STRIPE_WEBHOOK_SECRET)
-    else:
-        import json
-        event = stripe.Event.construct_from(json.loads(payload), stripe.api_key)
+    if not STRIPE_WEBHOOK_SECRET:
+        raise ValueError("STRIPE_WEBHOOK_SECRET not configured - webhook verification disabled")
+    event = stripe.Webhook.construct_event(payload, sig_header, STRIPE_WEBHOOK_SECRET)
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/backend/core/stripe_service.py` around lines 194
- 198, The code currently falls back to stripe.Event.construct_from when
STRIPE_WEBHOOK_SECRET is empty, bypassing signature verification; change this to
fail closed: in the webhook handler check STRIPE_WEBHOOK_SECRET and if it's
missing or empty raise an explicit error/abort the request (e.g., throw an
exception or return a 4xx/5xx response) instead of calling
stripe.Event.construct_from, and ensure only
stripe.Webhook.construct_event(payload, sig_header, STRIPE_WEBHOOK_SECRET) is
used for verified parsing; reference STRIPE_WEBHOOK_SECRET,
stripe.Webhook.construct_event, stripe.Event.construct_from, payload and
sig_header when updating the logic.
```python
if event_type == "checkout.session.completed":
    email = data.get("customer_email", "")
    plan = data.get("metadata", {}).get("plan", "pro")
    _grant_access(email, plan)
    return {"status": "access_granted", "email": email, "plan": plan}
```
Access token generated but never delivered to the user.
_grant_access creates a token after checkout.session.completed, but the token is only stored server-side. The success redirect URL doesn't include the token, and App.tsx shows no mechanism to retrieve or store it. Users won't be able to authenticate after payment.
Consider one of these approaches:
- Include the token in the success redirect URL
- Store the token server-side and look up by email when the user lands on /app
- Use Stripe's customer portal for session management instead of custom tokens
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/backend/core/stripe_service.py` around lines 203
- 207, The webhook handler for event_type "checkout.session.completed" calls
_grant_access which generates a token but never delivers it to the user; update
the flow so the token is retrievable by the client after checkout—either append
the generated token to the checkout success redirect URL, or persist the token
server-side keyed by customer email (data.get("customer_email")) and implement a
lookup endpoint the frontend (App.tsx) can call on /app to fetch and store the
token, ensuring _grant_access returns the token (or its ID) so it can be
included in the response/redirect.
```python
@app.get("/api/stripe/subscription/{email}")
async def stripe_subscription(email: str):
    return get_subscription_status(email).model_dump()


@app.post("/api/stripe/cancel/{email}")
async def stripe_cancel(email: str):
    return cancel_subscription(email)
```
Stripe subscription endpoints expose PII and allow cancellation by email without auth.
/api/stripe/subscription/{email} returns subscription details for any email, and /api/stripe/cancel/{email} can cancel any user's subscription. This allows enumeration and unauthorized cancellations.
These endpoints should require authentication to verify the caller owns the email address, or use session-based identity instead of email in the URL.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/backend/main.py` around lines 379 - 386, The
endpoints stripe_subscription and stripe_cancel currently accept an email path
param which exposes PII and permits unauthenticated enumeration/cancellations;
change them to require authenticated identity (e.g. a dependency injection like
get_current_user or session) and stop taking email in the URL: fetch the
caller's email from the authenticated principal or session, verify ownership (or
admin scope) before returning subscription data or performing cancellation, and
return only minimal, non-sensitive fields. Update stripe_subscription(email:
str) -> stripe_subscription(current_user) and stripe_cancel(email: str) ->
stripe_cancel(current_user) (or equivalent auth dependency), add an
authorization check in cancel_subscription to ensure current_user.email matches
the subscription owner (or user has admin role), and remove or redact any PII
from responses.
```bash
# Criar .env
cat > .env <<ENVFILE
ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
ANTHROPIC_MODEL=claude-sonnet-4-20250514
API_PORT=${API_PORT}
UI_PORT=${UI_PORT}
STRIPE_SECRET_KEY=${STRIPE_SECRET_KEY:-}
STRIPE_PUBLISHABLE_KEY=${STRIPE_PUBLISHABLE_KEY:-}
STRIPE_WEBHOOK_SECRET=${STRIPE_WEBHOOK_SECRET:-}
STRIPE_PRICE_ID=${STRIPE_PRICE_ID:-}
APP_URL=${APP_URL:-http://${PUBLIC_IP:-localhost}}
ENVFILE
```
APP_URL defaults to localhost due to variable initialization order.
Line 170 uses ${PUBLIC_IP:-localhost}, but PUBLIC_IP is only assigned on Line 327. In non-domain deployments this tends to persist APP_URL=http://localhost, which breaks external callback/redirect scenarios.
🌐 Suggested fix
WEBAPP_DIR="$APP_DIR/squads/legal-analyst/webapp"
cd "$WEBAPP_DIR"
+PUBLIC_IP="${PUBLIC_IP:-$(curl -fsS https://ifconfig.me/ip 2>/dev/null || echo localhost)}"
+if [ -n "$DOMAIN" ]; then
+ DEFAULT_APP_URL="https://${DOMAIN}"
+else
+ DEFAULT_APP_URL="http://${PUBLIC_IP}"
+fi
+
# Criar .env
cat > .env <<ENVFILE
...
-APP_URL=${APP_URL:-http://${PUBLIC_IP:-localhost}}
+APP_URL=${APP_URL:-${DEFAULT_APP_URL}}
ENVFILE

Also applies to: 327-327
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/deploy-hostinger.sh` around lines 160 - 171, The
APP_URL entry in the .env here-doc is using ${PUBLIC_IP:-localhost} before
PUBLIC_IP is assigned, causing APP_URL to default to localhost; move or
recompute APP_URL after PUBLIC_IP is set (or change the here-doc generation to
insert an APP_URL value derived from the same logic that assigns PUBLIC_IP).
Specifically, update the .env creation (the here-doc that writes APP_URL) so it
runs after the script sets PUBLIC_IP (or replace the inline expansion with a
later assignment like APP_URL="http://${PUBLIC_IP}" once PUBLIC_IP is available)
to ensure external callbacks/redirects use the actual public IP.
```tsx
const [showVSL, setShowVSL] = useState(() => {
  const params = new URLSearchParams(window.location.search);
  if (params.get("success") === "true") return false;
  if (window.location.pathname === "/app") return false;
  return true;
```
Enforce backend access validation before unlocking /app.
Access is currently controlled by URL conditions (/app or ?success=true) plus local state. This is bypassable and does not verify auth/subscription status before exposing app functionality.
Also applies to: 36-39
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/frontend/src/App.tsx` around lines 19 - 23, The
current useState initializer for showVSL (and related logic around
window.location.search and pathname) relies solely on client-side URL checks and
must be replaced with a server-validated decision: remove the URL-only gating
and instead call a backend access validation endpoint (e.g., an
auth/subscription check) from a useEffect on mount (use the existing
showVSL/setShowVSL symbols), await the response, and only setShowVSL(false) when
the backend confirms access; still respect the explicit ?success=true query or
other UI flags but only as secondary hints after the server returns success, and
ensure errors/defaults keep showVSL true to avoid accidental exposure.
```tsx
(docId: string, page?: number) => {
  const doc = pdf.documents.find((d) => d.doc_id === docId);
  if (doc) {
    pdf.setActiveDoc(doc);
    if (page) pdf.setActivePage(page);
    setActiveView("documents");
```
Use an explicit undefined check when applying referenced page.
if (page) skips valid 0 values. Use page !== undefined to avoid dropping boundary page references.
Suggested fix
- if (page) pdf.setActivePage(page);
+ if (page !== undefined) pdf.setActivePage(page);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/frontend/src/App.tsx` around lines 132 - 137, The
page-check currently uses a truthy test (if (page)) which will skip valid page
0; update the conditional in the handler that looks up pdf.documents and calls
pdf.setActiveDoc/pdf.setActivePage (the anonymous callback taking (docId:
string, page?: number)) to use an explicit undefined check (page !== undefined)
before calling pdf.setActivePage so boundary page 0 is preserved; keep the
surrounding calls to pdf.setActiveDoc and setActiveView intact.
```tsx
try {
  const res = await fetch(`${API_BASE}/stripe/checkout`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ plan: planKey, customer_email: email }),
  });
  const data = await res.json();
  if (data.checkout_url) {
    window.location.href = data.checkout_url;
  }
```
Handle non-2xx responses from checkout endpoint.
The code calls res.json() without checking res.ok, which means HTTP 4xx/5xx responses with JSON bodies will be parsed silently and may lack checkout_url.
🛡️ Suggested fix
const res = await fetch(`${API_BASE}/stripe/checkout`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ plan: planKey, customer_email: email }),
});
+ if (!res.ok) {
+ const errorData = await res.json().catch(() => ({}));
+ throw new Error(errorData.detail || `HTTP ${res.status}`);
+ }
const data = await res.json();
if (data.checkout_url) {
window.location.href = data.checkout_url;
+ } else {
+ throw new Error("Checkout URL not received");
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx` around lines 54 -
63, The fetch response from the checkout endpoint is parsed without checking
HTTP status; update the checkout flow in VSLPage.tsx (the try block using fetch,
variables res and data) to first check res.ok and handle non-2xx responses: if
!res.ok, parse the JSON/error body (or read text) to extract an error message
and surface it (throw or call setError/log) including res.status and the
message, otherwise continue to parse data and redirect using data.checkout_url;
ensure you still reference planKey and email when building the request and fail
fast on bad responses to avoid silently processing error payloads.
```tsx
<button
  onClick={scrollToPricing}
  className={`w-full py-3 rounded-xl font-semibold transition ${
    plan.popular
      ? "bg-gradient-to-r from-amber-500 to-amber-600 text-white shadow-lg shadow-amber-500/25"
      : "bg-white/5 border border-white/10 text-white hover:bg-white/10"
  }`}
>
  Assinar Agora
</button>
```
Fallback plan buttons don't trigger checkout.
When the plans API fails, the fallback static plans render with buttons that call scrollToPricing instead of handleCheckout. Users will be unable to subscribe.
🐛 Suggested fix
- <button
- onClick={scrollToPricing}
- className={`w-full py-3 rounded-xl font-semibold transition ${
+ <button
+ onClick={() => handleCheckout(plan.name.toLowerCase())}
+ disabled={loading === plan.name.toLowerCase() || !email}
+ className={`w-full py-3 rounded-xl font-semibold transition ${
plan.popular
? "bg-gradient-to-r from-amber-500 to-amber-600 text-white shadow-lg shadow-amber-500/25"
: "bg-white/5 border border-white/10 text-white hover:bg-white/10"
- }`}
+ } disabled:opacity-50 disabled:cursor-not-allowed`}
>
- Assinar Agora
+ {loading === plan.name.toLowerCase() ? "Processando..." : "Assinar Agora"}
  </button>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@squads/legal-analyst/webapp/frontend/src/pages/VSLPage.tsx` around lines 508
- 517, The fallback static plan buttons currently call scrollToPricing which
prevents checkout; update the button's onClick in the fallback rendering (where
plan is used and the class conditional on plan.popular) to call handleCheckout
with the appropriate plan identifier/object instead of scrollToPricing so
clicking a fallback plan initiates the subscription flow; ensure you pass the
same argument shape that the existing handleCheckout handler expects (e.g., plan
or plan.id).
Summary
This PR introduces the Legal Analyst Squad, a comprehensive AI-powered system for judicial process analysis. It combines a Python FastAPI backend with a React frontend to provide intelligent legal document analysis, jurisprudence research, and procedural guidance based on Brazilian judicial standards (CNJ compliance, DATAJUD schema, and judicial precedents).
Key Changes
Backend Architecture (squads/legal-analyst/webapp/backend/)
- core/agent_engine.py: Core orchestration system routing requests through a 15-agent pipeline with intent detection and phase management (Triagem → Pesquisa → Análise → Fundamentação → Validação)
- core/chat_manager.py: Session management with context windows and agent routing
- core/document_store.py: In-memory document management with cross-referencing and clip support
- core/pdf_processor.py: PDF extraction, text/image processing, clipping, and search capabilities
- main.py: REST API endpoints for chat, document upload, agent management, and legal document drafting
- core/models.py: Pydantic schemas for chat messages, sessions, documents, agents, and references

Frontend Application (squads/legal-analyst/webapp/frontend/)
- ChatInterface.tsx: Message composition with document references and considerations
- PDFViewer.tsx: Multi-document viewer with search, pagination, clipping, and thumbnail support
- AgentPanel.tsx: Agent discovery and selection with tier-based organization
- LegalEditor.tsx: Document drafting interface for legal pieces (contrarrazões, recursos, etc.)
- useChat, useAgents, usePDF hooks for state management

Agent Definitions (15 specialized agents)
Each agent has detailed markdown definitions with YAML configuration, persona, scope, and operational guidelines.

Workflows & Templates

Knowledge Base & References
- data/legal-kb.md: Mission, architecture, frameworks, and compliance overview

Deployment & Configuration
- deploy.sh: Multi-platform deployment (local, Hostinger, Railway, Fly.io, Render, Vercel)
- deploy-hostinger.sh

https://claude.ai/code/session_01Eo66L1GycRXGVeC5q2qk1G
Summary by CodeRabbit
New Features
Pipeline
Documentation
Infrastructure