What it does
POST text and receive a vector embedding from OpenAI text-embedding-3-small (1536 dimensions). No account, no API key, no subscription — pay per 1k tokens via x402.

Single input or batch (array). A batch of ≥100 inputs gets reduced pricing ($0.00003/1k tokens vs $0.00005/1k standard). Token counting uses cl100k_base (the same tokenizer OpenAI uses) for honest billing — you can verify the token count yourself. Input tokens and model version are always returned, so you know exactly what you paid for.

Models: openai-3-small (default, 1536d), openai-3-large (3072d). Composes naturally with ScrapePay → MarkdownOpt → EmbedPay → vector DB.
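The pricing rules above reduce to simple arithmetic. A minimal sketch of a cost estimator using the rates quoted here (`embed_cost` is an illustrative helper, not part of the API):

```python
def embed_cost(total_tokens: int, num_inputs: int = 1) -> float:
    """Estimate EmbedPay cost in USD from the published rates.

    Batches of >= 100 inputs are billed at $0.00003 per 1k tokens;
    everything else at the standard $0.00005 per 1k tokens.
    """
    rate_per_1k = 0.00003 if num_inputs >= 100 else 0.00005
    return total_tokens / 1000 * rate_per_1k

# A single 9-token input vs 1M tokens spread across a 200-document batch
print(embed_cost(9))                           # standard rate applies
print(embed_cost(1_000_000, num_inputs=200))   # batch rate applies
```

Note the discount keys off the number of inputs in one request, not total tokens, so packing a corpus into one ≥100-element array is cheaper than sending documents one at a time.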
When to use it
- RAG pipeline: ScrapePay → MarkdownOpt → EmbedPay → insert into Qdrant/Pinecone
- Semantic search over agent memory: embed each memory chunk for cosine similarity retrieval
- Batch-embed a corpus of documents at batch pricing before ingestion
- Duplicate detection: embed two texts and compare cosine similarity
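The duplicate-detection case comes down to cosine similarity between two returned embedding vectors. A pure-stdlib sketch (the vectors here are toy values, not real model output):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

emb_1 = [0.0123, -0.0456, 0.0789]   # "embedding" field from one response
emb_2 = [0.0125, -0.0450, 0.0791]   # embedding of a near-duplicate text

# Near-duplicates score close to 1.0; unrelated texts score much lower
print(cosine_similarity(emb_1, emb_2))
```

The same function serves the semantic-search case: embed the query, then rank stored memory chunks by similarity to it.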
Request schema
```json
{
  "input": "The quick brown fox jumps over the lazy dog",
  "model": "openai-3-small"
}
```
Response schema
```json
{
  "success": true,
  "embedding": [
    0.0123,
    -0.0456,
    0.0789
  ],
  "model": "openai-3-small",
  "dimensions": 1536,
  "input_tokens": 9,
  "model_version": "text-embedding-3-small (2024-01-25)",
  "payment_hash": "0x..."
}
```
Code example — TypeScript via MCP
Install the MCP server once; all 22 services become tool calls.
```typescript
// Configure @melis-ai/x402-tools-mcp in your MCP client
// Then call the tool:
const result = await mcpClient.callTool("embedpay", {
  input: "The quick brown fox jumps over the lazy dog",
  model: "openai-3-small"
});
console.log(result);
// ["success","embedding","model","dimensions","input_tokens","...
```
→ MCP setup guide
Code example — Python via direct HTTP
```python
import requests

# The x402 payment header must be set by your wallet client.
# See x402.org for client libraries.
headers = {
    "Content-Type": "application/json",
    "x-payment": "<signed-x402-payment-header>",
}
resp = requests.post(
    "https://embedpay.melis.ai/embed",
    json={
        "input": "The quick brown fox jumps over the lazy dog",
        "model": "openai-3-small",
    },
    headers=headers,
)
print(resp.json())
```
Code example — curl with internal key bypass
For testing with an issued internal key (skips x402 payment flow):
```shell
curl -X POST https://embedpay.melis.ai/embed \
  -H "Content-Type: application/json" \
  -H "x-internal-key: YOUR_KEY" \
  -d '{"input":"The quick brown fox jumps over the lazy dog","model":"openai-3-small"}'
```
How is this different from alternatives?
EmbedPay vs OpenAI Embeddings API
OpenAI charges $0.02 per million tokens ($0.00002/1k) — cheaper at scale, but it requires an account, an API key, and rate-limit management. EmbedPay wraps the same model at 2.5× the wholesale rate as an x402 convenience premium: no signup, and it composes with other x402 services.
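The 2.5× figure is easy to sanity-check from the per-1k rates (a worked check, not service code):

```python
openai_per_1k = 0.00002    # $0.02 per million tokens, direct from OpenAI
embedpay_per_1k = 0.00005  # EmbedPay standard rate

# Convenience premium: ratio of the two per-1k rates
premium = embedpay_per_1k / openai_per_1k
print(premium)

# At 1M tokens: $0.02 direct vs $0.05 through EmbedPay (standard rate)
print(1_000_000 / 1000 * openai_per_1k)
print(1_000_000 / 1000 * embedpay_per_1k)
```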
EmbedPay vs Voyage AI
Voyage voyage-3-lite is very competitive at $0.016/M tokens and has better retrieval quality on some benchmarks. EmbedPay is the right choice when you want zero-friction embedding in an existing x402 agent stack without adding another provider.
FAQ
Does it work without an account?
Yes. x402 is account-less. Your agent's wallet signs the payment and retries automatically. No registration, no API key, no subscription.
What happens on failure?
Returns HTTP 413 if any input exceeds 8,000 tokens, HTTP 400 for empty or non-string inputs, and HTTP 503 if nomic-embed is requested but the self-hosted backend is not configured. None of these settle payment.
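A defensive client can branch on these status codes before retrying. A sketch (the function name and messages are illustrative, not part of the API):

```python
def classify_embed_error(status: int) -> str:
    """Map EmbedPay status codes to a suggested client action."""
    if status == 200:
        return "ok"
    if status == 413:
        return "input too long (> 8,000 tokens): split or truncate, then retry"
    if status == 400:
        return "bad input: send non-empty strings only"
    if status == 503:
        return "requested backend unavailable: fall back to a hosted model"
    return f"unexpected status {status}: inspect the response body"

print(classify_embed_error(413))
```

Because failed requests never settle payment, retrying after fixing the input costs nothing extra.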
What is the rate limit?
600 requests per minute per IP. 10M tokens per IP per hour.
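To stay under 600 requests/minute, a client only needs to space requests at least 60/600 = 0.1 s apart. A minimal client-side pacing sketch (illustrative; it does not track the separate 10M tokens/hour budget):

```python
import time

class RequestPacer:
    """Spaces outgoing requests to stay under a requests-per-minute cap."""

    def __init__(self, rpm: int = 600):
        self.min_interval = 60.0 / rpm   # 0.1 s at 600 rpm
        self._last = 0.0

    def wait(self) -> None:
        """Sleep just long enough to honor the cap, then record the send time."""
        sleep_for = self.min_interval - (time.monotonic() - self._last)
        if sleep_for > 0:
            time.sleep(sleep_for)
        self._last = time.monotonic()

pacer = RequestPacer(rpm=600)
for _ in range(3):
    pacer.wait()   # then send the embed request
```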
Is this open-source?
The service code is closed-source for security reasons. The MCP wrapper that calls it is open-source and MIT-licensed: github.com/mizukaizen/x402-tools-mcp.
Who built this?
Part of the melis.ai agent infrastructure stack. Running on a dedicated Helsinki VPS since early 2026. Contact sean@melis.ai.