
how this website was built

the meta case study: ai-native creation from research notes to deployed site

tldr
initial theory: can you build a useful, authentic portfolio website entirely through ai-assisted workflows, sourced from structured research notes?
hypothesis: a 55-file karpathy-style research wiki, combined with claude code and openai codex, can produce a portfolio site that is more honest and more information-dense than anything hand-written from memory
maintenance: a scheduled cowork agent runs every sunday, drafts a privacy-filtered update proposal, and never auto-commits
status: shipped. you're looking at the output.
skills: ai-native workflows, claude code, openai codex, karpathy wiki pattern, privacy-first automation, vercel deployment, information architecture
studio blueprint
research wiki → extract + structure → generate ui + copy → review + tighten → deploy + test → feedback from recruiters → iterate in public
prerequisites
process
phase | what happened | tool
context | loaded the full wiki into claude code sessions, established the design guide and voice guidelines | claude code
research | reviewed all 55 wiki files for public-safe content, mapped case studies to visual structures | claude code + manual review
design | created design-guide.md during a cowork session: color palette, component patterns, content rules, voice guidelines | claude code cowork
extraction | pulled structured data from the wiki into TLDR tables, metrics, flow diagrams, and honest self-assessments | claude code
generation | generated the full site: HTML, CSS, and all case study content from extracted wiki data | claude code + openai codex
review | manual review of every section for accuracy, privacy compliance, and tone. tightened copy, removed em dashes, lowercased headers | manual
meta docs | wrote design-guide.md, portfolio-full.md, and qa-facts.md as structured reference files for the chat panel and the self-update agent | claude code
self-update agent
how it maintains itself
schedule: every sunday at 6pm local time (cron 0 18 * * 0)
runtime: claude agent sdk on claude opus 4.6
what it does: reads the wiki index + log, identifies pages changed in the last 7 days, diffs them against site content, runs a privacy filter, and writes one proposal file
deliverable: exactly one markdown file at proposed-updates/YYYY-MM-DD.md
autonomy: propose only, never merge. no git commits, no vercel deploys, no direct edits to source files
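the schedule and the single-file deliverable can be sketched in python (module and function names here are my own illustration, not the actual agent code):

```python
from datetime import date, datetime

# hypothetical output directory, matching the deliverable contract above
PROPOSALS_DIR = "proposed-updates"

def proposal_path(run_date: date) -> str:
    """exactly one markdown file per run: proposed-updates/YYYY-MM-DD.md"""
    return f"{PROPOSALS_DIR}/{run_date:%Y-%m-%d}.md"

def is_scheduled_run(now: datetime) -> bool:
    """mirrors cron 0 18 * * 0: sundays at 18:00 local time"""
    return now.weekday() == 6 and now.hour == 18 and now.minute == 0

print(proposal_path(date(2025, 1, 5)))  # proposed-updates/2025-01-05.md
```

the date-stamped filename is the whole output contract: one run, one file, nothing else touched.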
wiki index + log → read changed pages → diff against site → privacy filter → write proposal → manual review → git + vercel
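a minimal sketch of that loop, assuming the wiki log maps each page to its last-changed date (all names and data shapes are illustrative):

```python
from datetime import date, timedelta

def run_agent(wiki_log: dict[str, date], wiki_text: dict[str, str],
              site_text: dict[str, str], today: date, privacy_ok) -> str:
    """propose only: returns proposal markdown, never commits or deploys"""
    cutoff = today - timedelta(days=7)
    changed = [p for p, d in wiki_log.items() if d >= cutoff]         # wiki index + log
    stale = [p for p in changed if wiki_text[p] != site_text.get(p)]  # diff against site
    safe = [p for p in stale if privacy_ok(wiki_text[p])]             # privacy filter
    lines = [f"# proposal {today:%Y-%m-%d}", ""]
    lines += [f"- update `{p}` from the wiki" for p in safe]          # write proposal
    return "\n".join(lines)  # manual review, then git + vercel, happen downstream
```

the return value is the only side channel: the real agent writes it to one file and stops, leaving the merge to a human.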
section | contents
what's new in the wiki | every wiki page whose changelog was touched in the last 7 days, summarized
proposed edits | concrete bullet or table changes that passed the privacy allowlist
flagged for manual review | edits the agent is not sure about, with the uncertainty noted
audit trail | anything deliberately stripped, with the denylist rule that fired
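the four-section layout could be stamped out like this (a sketch; the real proposal format may differ):

```python
def proposal_skeleton(whats_new: list[str], proposed: list[str],
                      flagged: list[str], audit: list[str]) -> str:
    """one markdown proposal with the four sections, in order"""
    def section(title: str, items: list[str]) -> str:
        body = "\n".join(f"- {i}" for i in items) or "- none this week"
        return f"## {title}\n{body}"
    return "\n\n".join([
        section("what's new in the wiki", whats_new),
        section("proposed edits", proposed),
        section("flagged for manual review", flagged),
        section("audit trail", audit),
    ])
```

empty sections still render with a "none this week" placeholder, so the reviewer always sees all four headings.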
pii methodology: deny-by-default allowlist
governing heuristic: would ayush be comfortable if the source of the insight read it on the public site? if yes, allow. if unsure, flag. uncertainty always routes to manual review.
# | allow rule | description
1 | already public | the same fact is already on the site and just needs a refresh
2 | public source | the wiki page cites a published article, press release, SEC filing, or named dataset
3 | historical / frozen | content about HOP (shipped), curinos (former employer, frozen data), or public biographical context
4 | public research pattern | technical design, methodology, or frameworks that describe how ayush thinks rather than who he is talking to
deny rule | always stripped or flagged
megaeth internal strategy | token plans, treasury, roadmap, unannounced partnerships
active deal or partner names | any named ongoing negotiation or counterparty
non-public pricing | unit economics pulled from private conversations
internal treasury figures | any treasury or financial data not publicly disclosed
curinos client IDs | client or bank identifications from former employer
personal contact info | emails, phone numbers, anything doxxable
primary research (trickiest category): business-neutral, non-confidential insights are allowed when framed as generic patterns without attribution. quoted sentiments tied to identifiable roles, reverse-identifiable anonymous attributions, and active-counterparty quotes are always stripped.
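a minimal sketch of the deny-first routing, assuming keyword rules (these patterns are toy stand-ins for the real lists above):

```python
import re

# toy stand-ins: the real deny and allow lists are richer than keywords
DENY = [r"treasury", r"unannounced", r"[\w.]+@[\w.]+"]
ALLOW = [r"\balready public\b", r"\bshipped\b", r"\bSEC filing\b"]

def route(fact: str) -> str:
    """deny first, allow second, and anything unmatched goes to manual review"""
    for rule in DENY:
        if re.search(rule, fact, re.IGNORECASE):
            return "strip"   # logged in the audit trail with the rule that fired
    for rule in ALLOW:
        if re.search(rule, fact, re.IGNORECASE):
            return "allow"
    return "flag"            # deny-by-default: uncertainty routes to a human
```

the ordering matters: a fact matching both lists is still stripped, because deny rules run first.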
case study | default mode | special rules
HOP Network | allow-by-default | frozen and shipped, no active data
Cloud Brain | allow-by-default (technical) | flags specific megaeth integration numbers and contributor names
Maritime | allow-by-default (public competitive) | strips any quote tied to an interviewed executive or insider-access ship management detail
capability | status
write proposal markdown | allowed
read wiki index + log | allowed
read wiki raw/ folder | hard-prohibited
git commit | not allowed
vercel deploy | not allowed
edit source files | not allowed
browser control / web scraping | not allowed
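the capability table is effectively an allowlist gate; a sketch of how it could be enforced (the action names are illustrative):

```python
# illustrative action names; raw/ reads, commits, and deploys are refused
# like everything else not on the allowlist
ALLOWED = {"write_proposal_markdown", "read_wiki_index", "read_wiki_log"}

def guard(action: str) -> None:
    """refuse anything not explicitly allowed"""
    if action not in ALLOWED:
        raise PermissionError(f"agent may not perform: {action}")

guard("write_proposal_markdown")   # passes silently
# guard("git_commit")              # would raise PermissionError
```

an allowlist means new capabilities are opt-in: anything the agent gains by a runtime upgrade is still blocked until a human adds it.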
tools
tool | role | notes
claude code | primary generation + review | used for design system creation, content extraction, site generation, and ongoing maintenance via the self-update agent
openai codex | second-opinion generation | cross-model research and verification, alternative drafts for copy and structure
55-file karpathy wiki | source of truth | structured markdown knowledge base with changelogs, cross-references, and CLAUDE.md schema
vercel | deployment + hosting | single-command deploy, preview URLs for review before publish
next.js | framework | app router, server components, static generation for performance
cowork agent sdk | weekly automation | claude agent sdk on opus 4.6, scheduled via cron for the self-update agent
manual review | quality gate | every generated output reviewed for accuracy, privacy, and tone before publish
design principles
if you want to build something similar
verdict
verdict: shipped. ai-native workflows can produce an honest, dense portfolio site sourced entirely from structured research notes. the weekly self-update agent keeps it current without compromising privacy.