AI Engineering · React · Gemini 2.5 Pro

Prompt rewrites aligned with each target LLM, ready for production teams.

The LLM Prompt Optimizer is a React + Vite SPA that ingests draft prompts, expands implicit requirements, and ships structured instructions tailored to Gemini, Claude, ChatGPT, or Llama. It prioritizes determinism, anti-hallucination guardrails, and local-first history so the showcased results stay trustworthy.

4 · LLM-specific optimization frameworks
12 · Prebuilt templates with placeholder parsing
<2s · Median Gemini round-trip
0 · Server dependencies (pure front-end)
promptOptimizer.ts · Live request

const result = await optimizePrompt({
  prompt: "Outline an AI compliance briefing",
  provider: targetLLM.CLAUDE,
  variables: {
    audience: "Risk Committee",
    region: "EU",
    length: "400 words",
  },
});

// ✅ Returns
// · XML sections (<context>, <task>, <constraints>)
// · Negative instructions
// · Success metrics + QA checklist

Overview

Built for prompt engineers who need clarity over spectacle.

The application runs entirely in the browser, stores settings locally, and keeps every optimization reproducible.

Template-driven inputs

Content, code, SQL and marketing templates surface variables so teams can fill context without editing raw text.
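A minimal sketch of how that placeholder surfacing might work; the `{{name}}` delimiter and the `extractPlaceholders` helper are assumptions for illustration, not the repo's actual API:

```typescript
// Hypothetical helper: collect unique {{placeholder}} names from a
// template so the UI can render one input field per variable.
function extractPlaceholders(template: string): string[] {
  const seen = new Set<string>();
  for (const match of template.matchAll(/\{\{\s*([\w.-]+)\s*\}\}/g)) {
    seen.add(match[1]);
  }
  return [...seen];
}

// Example: a marketing template with two variables.
const fields = extractPlaceholders(
  "Write a {{tone}} product blurb for {{audience}}.",
); // → ["tone", "audience"]
```

Deduplicating via a `Set` means a variable repeated across the template still yields a single input field.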

History intelligence

Searchable history entries with one-click favorites capture the timestamp, target LLM, and optimized output.
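A plausible shape for those entries and their search filter; the field names and `searchHistory` helper are assumptions, not the repo's actual types:

```typescript
// Hypothetical history record persisted alongside each optimization.
interface HistoryEntry {
  id: string;
  timestamp: number; // Unix millis
  targetLLM: string; // e.g. "claude"
  optimizedPrompt: string;
  favorite: boolean;
}

// Case-insensitive search over the optimized output, optionally
// narrowed to favorites only.
function searchHistory(
  entries: HistoryEntry[],
  query: string,
  favoritesOnly = false,
): HistoryEntry[] {
  const q = query.toLowerCase();
  return entries.filter(
    (e) =>
      (!favoritesOnly || e.favorite) &&
      e.optimizedPrompt.toLowerCase().includes(q),
  );
}
```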

Provider toggle

Switch between Gemini SDK and any OpenAI-compatible endpoint through a modal—no rebuilds required.

Capabilities

Operational features showcased in the public repo.

LLM-aware frameworks

Each model family receives a tailored system prompt emphasizing structure, tone, and guardrails.

Clipboard automation

Copy buttons target both draft and optimized prompt fields, with feedback loops for accessibility.

Progress instrumentation

Loading states, progress bars, and error surfaces keep the single-page flow predictable.

Local-first credentials

API keys and base URLs stay in localStorage, so static GitHub Pages deployments never send credentials to a backend.

Architecture

Transparent layering that keeps maintenance minimal.

UI layer

App.tsx orchestrates state, variable parsing, template selection, and history tabs.

Service adapter

services/geminiService.ts detects provider, composes system prompts, and handles fetch or SDK calls.
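The provider-detection step could be sketched as below; the config shape and `resolveTransport` helper are assumptions about how the adapter routes requests, not its actual signatures:

```typescript
// Hypothetical config: either the native Gemini SDK path or any
// OpenAI-compatible HTTP endpoint supplied by the user.
type ProviderConfig = {
  provider: "gemini" | "openai-compatible";
  baseUrl?: string;
};

// Route a request to the Gemini SDK when no custom endpoint is set,
// otherwise fall back to a raw fetch against the user's base URL.
function resolveTransport(config: ProviderConfig): "sdk" | "fetch" {
  return config.provider === "gemini" && !config.baseUrl ? "sdk" : "fetch";
}
```

Keeping the decision in one pure function makes the SDK-versus-fetch split easy to unit-test without network access.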

Settings hook

useSettings.ts persists provider configs and temperature while guarding against invalid payloads.
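One way such a guard might look; the settings schema, defaults, and `loadSettings` name are assumptions, since the page does not show the hook's internals:

```typescript
// Hypothetical persisted-settings shape with safe defaults.
interface Settings {
  provider: string;
  apiKey: string;
  baseUrl: string;
  temperature: number;
}

const DEFAULTS: Settings = {
  provider: "gemini",
  apiKey: "",
  baseUrl: "",
  temperature: 0.7, // assumed default
};

// Parse a persisted JSON payload, falling back to defaults whenever a
// stored value is missing, malformed, or out of range.
function loadSettings(raw: string | null): Settings {
  try {
    const parsed = raw ? JSON.parse(raw) : {};
    const t = Number(parsed.temperature);
    return {
      provider: typeof parsed.provider === "string" ? parsed.provider : DEFAULTS.provider,
      apiKey: typeof parsed.apiKey === "string" ? parsed.apiKey : DEFAULTS.apiKey,
      baseUrl: typeof parsed.baseUrl === "string" ? parsed.baseUrl : DEFAULTS.baseUrl,
      temperature: Number.isFinite(t) && t >= 0 && t <= 2 ? t : DEFAULTS.temperature,
    };
  } catch {
    return { ...DEFAULTS };
  }
}
```

Field-by-field validation means one corrupted key in localStorage degrades gracefully instead of wiping the whole config.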

Workflow

Four-step loop for ideation, rewriting, and reuse.

01

Seed

Choose a template or paste a custom brief; placeholders instantly appear as inputs.
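Filling those inputs back into the template can be sketched as a single substitution pass; the `{{name}}` syntax and `fillTemplate` helper are assumptions for illustration:

```typescript
// Hypothetical: replace each {{name}} token with its supplied value,
// leaving unknown placeholders untouched so gaps stay visible.
function fillTemplate(
  template: string,
  variables: Record<string, string>,
): string {
  return template.replace(/\{\{\s*([\w.-]+)\s*\}\}/g, (token, name) =>
    name in variables ? variables[name] : token,
  );
}

fillTemplate("Brief for {{audience}} in {{region}}", {
  audience: "Risk Committee",
  region: "EU",
});
// → "Brief for Risk Committee in EU"
```

Leaving unmatched tokens in place, rather than deleting them, surfaces missing variables before the prompt is sent.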

02

Configure

Set provider, API key, base URL, and temperature via the modal; values persist locally.

03

Optimize

Anti-hallucination instructions and formatting constraints accompany every request.
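One plausible way to attach those guardrails, assuming a simple string-composition approach; the section wording and `composeSystemPrompt` helper are illustrative, not the repo's actual prompt:

```typescript
// Hypothetical composer: prepend anti-hallucination rules and the
// XML output contract to every optimization request.
function composeSystemPrompt(draft: string, targetLLM: string): string {
  return [
    `You are optimizing a prompt for ${targetLLM}.`,
    "Constraints:",
    "- Preserve the user's intent; do not invent requirements.",
    "- If information is missing, ask for it instead of guessing.",
    "- Return the result as <context>, <task>, and <constraints> sections.",
    "",
    `Draft prompt:\n${draft}`,
  ].join("\n");
}
```

Hard-coding the output contract into the system prompt is what keeps responses parseable into the XML sections shown in the snippet above.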

04

Reuse

Copy, favorite, or rehydrate any history item for the next iteration.

Contact

Open to walkthroughs, audits, and collaborations.

Location

Lisbon · Remote-friendly