Public Work in Progress

The Agora for
AI Agents

Like the ancient Greek marketplace for dialogue and trade, Nea Agora is where agents discover one another, share capabilities, and build trust through real interaction.

Six months ago I asked Gemini to book me a haircut. It failed - not because it wasn’t smart, but because agents have no common way to discover or trust the services around them.

Nea Agora explores that missing layer: a minimal, agent-readable manifest for discovery and reasoning.

This project is being developed in public, with a focus on clarity over speed.
This is an infrastructure exploration. No product access yet.
agent_discovery.py
# PSEUDOCODE — illustrative only
# No public SDK yet. Demonstrates intended usage pattern.

import json

# Load an agent manifest (illustrative path)
with open("examples/restaurant.json") as f:
    manifest = json.load(f)

name = manifest["name"]
capabilities = set(manifest.get("capabilities", []))
contact = manifest.get("contact", {})

# Agent reasoning: decide what to ask next
if "reservation-request" in capabilities:
    next_step = (
        f"{name} appears to support reservations. "
        f"Ask for party size, date, and time."
    )
elif "hours" in capabilities:
    next_step = (
        f"{name} appears to publish hours. "
        f"Ask for today’s hours."
    )
else:
    next_step = (
        f"{name} lists capabilities: {sorted(capabilities)}. "
        f"Ask which contact channel is best."
    )

# Prefer email, fall back to website; neither may be present
channel = contact.get("email") or contact.get("website") or "unknown"

print("Next step:", next_step)
print("Contact via:", channel)
Note: illustrative pseudocode only. No public SDK is available yet.

How It Works

Discovery

Agents find other agents by capability and intent

Trust Signals (future)

How trust may be inferred from agent interactions over time.

Agents

Any AI platform can use it (Gemini, ChatGPT, Claude)

Related protocols

Nea Agora explores an earlier discovery layer, complementary to execution-focused systems like MCP.

Semantic Discovery (exploration)

A possible approach: map natural language intent to capability hints agents can use for discovery.

Cross-Platform

Works with any AI agent platform (Google, OpenAI, Anthropic). An emerging pattern for the open agent ecosystem.
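The semantic-discovery idea above can be sketched in a few lines. This is a toy illustration, not part of any spec: the keyword table and function below are invented to show how free-form intent might be reduced to capability hints an agent could match against a manifest.

```python
# Sketch: map free-form intent to capability hints for discovery.
# The keyword table below is invented for illustration only.
INTENT_HINTS = {
    "book": ["reservation-request"],
    "reserve": ["reservation-request"],
    "open": ["hours"],
    "hours": ["hours"],
}

def capability_hints(intent: str) -> set[str]:
    """Return capability hints whose keywords appear in the intent."""
    hints: set[str] = set()
    for word in intent.lower().split():
        hints.update(INTENT_HINTS.get(word, []))
    return hints

print(capability_hints("book a table for two"))  # {'reservation-request'}
```

A real mapping would need something richer than keyword lookup, but the shape is the point: intent in, capability hints out, with the manifest deciding the rest.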

Built by Leo Charny

I'm 62. Six months ago I asked Gemini to book me a haircut and it couldn't. So I started building the infrastructure that would let it.

This isn't a VC-backed moonshot. It's one person learning AI and shipping code. Early prototypes exist privately. The current public work focuses on clarifying the discovery layer first.

Read more on leocharny.com

This is real

A concrete artifact exists and is being refined in public. Early ideas are tested through code, not slides.

This is early

Scope is intentionally narrow. Many parts are unfinished or undefined by design.

For Explorers

If you’re interested in agent discovery and reasoning, the current public work is focused on a draft agent-manifest schema.

It is intentionally small and descriptive, and meant to be read by agents as a discovery hint, not used as an API or execution layer.

Current Work

Nea Agora is currently focused on a narrow, foundational question:

How can an AI agent discover an entity and understand what it is reasonable to ask or request, before any API, UI automation, or platform-specific integration?

The current public work explores a minimal agent manifest - a small, descriptive JSON document intended to be read by AI agents as a discovery and reasoning aid.
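As a sketch, such a manifest might look like the following. The field names are illustrative, drawn from the pseudocode example above rather than a finalized schema:

```json
{
  "name": "Example Bistro",
  "description": "Neighborhood restaurant taking reservations by email.",
  "capabilities": ["reservation-request", "hours"],
  "contact": {
    "email": "hello@example.com",
    "website": "https://example.com"
  }
}
```

An agent reading this learns what the entity is and what it is reasonable to ask, without any API call or platform integration.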

This work is early and intentionally limited in scope. It does not implement execution, authentication, enforcement, or trust guarantees.

View the draft agent-manifest repository

Get Updates

Get notified when there is concrete progress to share.

No spam. Updates when there’s real progress.