How custom AI development actually works.
Six stages, working software from week 1, fixed price after a signed spec, weekly Friday demos. No theatre, no committee, no surprise invoices.
Five principles, applied to every project
These aren’t marketing values. They’re the things that have made our projects ship.
Start from working code, not a blank page
Every custom AI we ship starts from the OpenClaw foundation — a battle-tested AI agent runtime with 60+ integrations already wired in. We extend it for your business instead of writing from scratch. That collapses what used to be a 6–12 month build into 3–12 weeks.
Show working software every Friday
You see a live demo every week from week 2 onwards. No 12-week silent build then a big reveal. If the AI is heading in the wrong direction, you see it in week 3 — not week 11.
One developer, one decision-maker
I work directly with your point of contact. No account managers, no PMs translating between you and the engineer. Faster decisions, fewer broken games of telephone.
Fixed price, signed scope
You sign off on a written technical spec before any code is written. Scope creep is your decision, not a surprise invoice. Every change request is quoted in writing.
Ship to production, not a demo
Prototypes that look great in a sandbox and fall over in production are the most expensive thing in AI. We test against real data, real edge cases, real users — before launch.
The six stages
Same process whether the project is €5K or €500K — only the depth changes. Each stage has clear inputs from your side and concrete outputs from ours.
Discovery
Understand the workflow.
Before scoping anything we sit with you (call, screen-share or in person if you're in Curaçao or the Netherlands) and watch the actual work happening. The point is to understand the messy reality, not the pitch-deck version. We finish discovery with a one-page process diagram showing exactly where AI fits.
- Your time (1–2 hours)
- Read access to relevant systems
- A volunteer who does the work today
- Process diagram
- Scope draft
- Honest go / no-go recommendation
Technical design
Decide what to build, in writing.
We write a 3–8 page technical spec that nails down: what the AI does, what tools it has, what data it can access, how the admin panel works, what success looks like. You sign it; the quote becomes a fixed price.
- Sign-off on scope
- Access to one test environment
- A point person on your side
- Written technical spec
- Fixed-price quote
- Project kick-off date
Build (with weekly demos)
AI logic + integrations + admin UI.
The bulk of the project. We build in vertical slices: by the end of build week 1 there's something you can poke at, even if it's 10% of the final scope. Every Friday: live demo, screen recording, list of next-week priorities. You shape the AI as it grows.
- ~2 hours/week of your time for demos
- Decisions on edge cases as they come up
- Working software, deployed to staging
- Friday demos & recordings
- Audit log + evaluation harness
Test & harden
Prove it survives reality.
Edge-case testing, security review, real users running real tasks. We pre-build an evaluation harness — a set of test cases — so future changes don't silently break old behaviour. This is the phase mediocre AI projects skip and good ones don't.
- Real test data
- 3–5 user volunteers for UAT
- Security/IT contact for review
- Edge-case test report
- Security review sign-off
- UAT user feedback addressed
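The evaluation-harness idea above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual tooling we ship: `classify_invoice`, the case names and the pass/fail predicates are hypothetical stand-ins for a real agent call and its checks.

```python
# Minimal evaluation harness sketch: a fixed set of test cases replayed
# against the agent so future changes don't silently break old behaviour.
# `classify_invoice` is a hypothetical stand-in for the real AI call.

from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    name: str
    input_text: str
    check: Callable[[str], bool]  # predicate over the agent's output


def classify_invoice(text: str) -> str:
    """Stand-in for the real agent call (LLM + tools)."""
    return "invoice" if "invoice" in text.lower() else "other"


CASES = [
    EvalCase("plain invoice", "Invoice #1042 from Acme BV",
             lambda out: out == "invoice"),
    EvalCase("not an invoice", "Meeting notes, Q3 planning",
             lambda out: out == "other"),
]


def run_harness(cases: list[EvalCase]) -> dict[str, bool]:
    """Run every case; a False entry in the result flags a regression."""
    return {c.name: c.check(classify_invoice(c.input_text)) for c in cases}
```

Running the harness before every deploy turns "did we break old behaviour?" into a per-case yes/no answer instead of a guess.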
Launch
Production deploy + team training.
Cut over to production — your server or our managed cloud at noraclawd.com. Recorded training session for your team. Documentation for admins. A WhatsApp group / Slack channel for the first 30 days so questions get answered fast.
- Production credentials
- Team availability for training session
- Live in production
- Recorded training
- Admin docs
- 30-day support channel
Steady-state
Iterate, monitor, evolve.
Custom AI is not "done" at launch — it gets better as your team uses it. Most clients move on to a managed plan from €249/mo for hosting, monitoring, model upgrades and ongoing tweaks. Or you self-host and call us for changes.
- Real usage
- Feedback loop
- Monthly improvements
- Model upgrades as new ones ship
- Optional new tools / integrations
Readiness checklist
Six questions to answer before kicking off a custom AI project. If most are “yes”, you’re ready. If most are “not sure”, the discovery call is exactly where to start.
01. Do we have a clearly defined process this AI will improve?
Why it matters: Vague problems make for vague AI. "Reduce admin time on invoices" is workable. "Use AI to be more efficient" is not.
02. Is the data the AI needs accessible?
Why it matters: If your data lives in a 1990s system with no API, building the connector might be 50% of the work.
03. Who decides what the AI should do — and who has authority to change it?
Why it matters: Single decision-maker = fast project. 4-person committee = slow project.
04. What does success look like, in numbers?
Why it matters: Hours saved, error rate, conversion, ticket deflection — pick one or two and we'll measure it from week 1.
05. Where will it be hosted?
Why it matters: Managed cloud is fastest. On-premise is slower but gives you full data control.
06. What is the budget envelope, ballpark?
Why it matters: Helps us scope realistically. €5K and €50K are both fine — but they buy different things.
Process FAQ
How is custom AI development different from a software project?
Custom AI uses LLMs and AI agents at the core, but most of the work is still software engineering — integrations, admin panels, error handling, deployment. The "AI part" is typically 20–30% of the codebase. The rest is the plumbing that turns a prompt into a reliable production system.
Do you train custom models?
Almost never. For 95% of business use cases, frontier models (Claude, Gemini, GPT) with the right tools, memory and prompting outperform a custom-trained model by a wide margin — and cost a fraction. We do fine-tune when there's a clear case (proprietary terminology, regulated outputs, narrow classification tasks). Most of the time, agentic systems beat fine-tunes.
What happens if it doesn't work?
We catch this in week 1–2 of build with a working prototype. If it's not viable, you get an honest "this won't work and here's why" — not 8 weeks of dressed-up failure. We have walked away from projects in week 2; that's better than charging for a build that won't ship.
Can we move suppliers later?
Yes. Everything we ship runs on open standards — OpenClaw is open source, integrations use standard APIs, the model layer is provider-agnostic. You're not locked in. Your code is your code.
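As a rough illustration of what "provider-agnostic model layer" means in practice, here is a minimal Python sketch. All names are assumed for illustration — this is not OpenClaw's real interface: application code talks to one abstract model interface, and each supplier sits behind a thin adapter.

```python
# Sketch of a provider-agnostic model layer (hypothetical names, not
# OpenClaw's actual API): the app depends on one interface, and each
# model supplier is an interchangeable adapter behind it.

from abc import ABC, abstractmethod


class ModelProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""


class EchoProvider(ModelProvider):
    """Placeholder adapter; a real one would wrap Claude, GPT or Gemini."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def answer(provider: ModelProvider, question: str) -> str:
    # Application code never names a specific vendor, so switching
    # suppliers is a configuration change, not a rewrite.
    return provider.complete(question)
```

Swapping `EchoProvider` for another adapter changes nothing else in the application — which is what keeps the supplier decision reversible.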
What does ongoing support look like?
Two flavours. Managed (€249–€749/mo) — we host, monitor, update, fix. Self-hosted — you run it; we're available on a retainer or per-change basis. Either way, no surprise vendor lock-in.
Ready to start with discovery?
Free 30-minute call. By the end of it I’ll tell you whether custom AI is a fit for what you’re trying to do — honestly, including if the answer is no.
Mark Austen, Founder — replies within 24 hours