How an OpenAI Co-founder Revolutionised Local LLM Thinking (and Why Companies Should Consider Creating Their Own)
By Ryan Ching
In 2026, AI use in business is the talking point above all others. Beyond wrestling with how to get the best out of AI and developing cohesive usage policies, the next dilemma companies face is managing the sprawl of AI tools and enterprise subscriptions paid for to keep up with the Joneses.
If your company has trodden the path of adopting Copilot, ChatGPT or Claude alongside a stack of AI tools, you are probably experiencing two unsettling realisations: 1) escalating costs, and 2) the knowledge, deep down, that every customer complaint, internal query and sensitive document read is being logged and processed on someone else's server in a data centre you know nothing about.
So, is it time to go local?
Build your own LLM, keep the data on your own server, manage costs and ensure data sovereignty? Previously this step required capital expenditure, project planning and some serious technical know-how. But that has all changed thanks to OpenAI co-founder Andrej Karpathy's casual blog post about LLM knowledge bases. Using his methodology, we describe how to build your own LLM fit for your enterprise, executable without deep technical knowledge, all for under $10k.
A note before we proceed: everything described here is achievable without deep technical knowledge. That said, "achievable without deep technical knowledge" and "impossible to spectacularly misconfigure" are not the same sentence. Build at your own risk, enjoy the process, and if at any point you find your employees staring at you expectantly whilst you are secretly floundering, professional consultancies like 3peat.ai exist precisely for this moment.
Step 1: Hardware
There are several out-of-the-box options for users looking to build their own AI workstation. If you are in the Apple ecosystem, the Mac Studio is the way to go. Otherwise, go with NVIDIA's DGX Spark with NemoClaw, which offers built-in enterprise controls.
Step 2: The Knowledge Base
The Old Way: RAG (Retrieval Augmented Generation)
Any enterprise LLM needs to run off a knowledge base: this is where you load it up with your company documents, processes, databases and whatnot. RAG worked by chunking your documents into fragments, converting them into mathematical vectors (embeddings), storing those in a database, and, at query time, hunting for the fragments most relevant to the question.
It worked. It also failed silently, required specialists to configure, and had a structural problem nobody talked about enough: the model had to rediscover knowledge from scratch on every single query. No accumulation. No compounding.
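To make that chunk-embed-retrieve loop concrete, here is a toy sketch. It uses a bag-of-words vector in place of a real embedding model and a plain list in place of a vector database, purely to show the shape of the pipeline; none of this is production code, and real systems use learned embeddings.

```python
# Toy sketch of the classic RAG loop: chunk, embed, store, retrieve.
import math
from collections import Counter

def chunk(text, size=6):
    """Split a document into fixed-size word fragments."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Stand-in 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

doc = ("Refunds are accepted within 30 days of purchase. "
       "Shipping takes five business days.")
store = [(c, embed(c)) for c in chunk(doc)]  # the "vector database"

def retrieve(query, k=1):
    """At query time, hunt for the most relevant fragments."""
    q = embed(query)
    return [c for c, v in sorted(store, key=lambda cv: -cosine(q, cv[1]))[:k]]

print(retrieve("how long do refunds take"))
# → ['Refunds are accepted within 30 days']
```

Note the structural problem the article describes: nothing here accumulates. Every query starts the hunt from scratch.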
The New Way: LLM Wiki
Karpathy's approach is annoyingly simple, and it works: the LLM compiles source documents into a structured, interlinked set of markdown pages, a wiki, that the agent reads directly when answering questions. Think of it as an indexed library catalogue that any non-technical staff member can open and review, with the LLM adding, indexing and updating cross-references every time it processes a new document and learns more.
The practical upside for an SME: your entire knowledge base (product FAQs, pricing schedules, process guides, compliance documentation) fits comfortably within this pattern without any specialist infrastructure. No vector databases, no embedding models. Just a folder of markdown files your team can open in any text editor, verify, and correct.
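The pattern is plain enough to sketch. The folder layout, file names and link syntax below are illustrative assumptions, not Karpathy's exact specification; the point is that the "index" is just ordinary markdown links the agent can follow, and the whole knowledge base stays human-readable.

```python
# Sketch of the wiki pattern: knowledge lives as plain markdown pages,
# and the agent reads pages (following cross-links) rather than
# searching a vector index. File names and layout are invented.
import re
from pathlib import Path

base = Path("kb")
base.mkdir(exist_ok=True)
(base / "index.md").write_text(
    "# Knowledge Base\n"
    "- [Returns policy](returns.md)\n"
    "- [Pricing](pricing.md)\n")
(base / "returns.md").write_text(
    "# Returns\nRefunds accepted within 30 days. See [Pricing](pricing.md).\n")
(base / "pricing.md").write_text(
    "# Pricing\nStandard plan: $49/month.\n")

LINK = re.compile(r"\[([^\]]+)\]\(([^)]+\.md)\)")

def read_with_links(page, depth=1):
    """Return a page's text plus every page it links to, one hop deep."""
    text = (base / page).read_text()
    if depth > 0:
        for _label, target in LINK.findall(text):
            text += "\n---\n" + read_with_links(target, depth - 1)
    return text

# The assembled text is what gets handed to the local model as grounding.
context = read_with_links("index.md")
```

Anyone on staff can open `kb/returns.md`, spot an error, and fix it in a text editor, which is exactly the auditability the vector-database approach lacked.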
Step 3: Configuring It to Do Stuff
The simplest way to understand this is through customer service, because every organisation has some version of it and most of them find some version of it exhausting.
Your local LLM, trained on your product documentation, pricing, returns policy, and the last two years of frequently asked questions, becomes a CS agent that knows your business. Not ChatGPT's version of your business, but your actual products, your actual pricing, your actual edge cases. It handles routine queries, escalates what it can't resolve, and doesn't hallucinate your competitor's return policy when a customer asks about yours.
The same logic applies across the organisation. Finance gets an agent that processes invoices against your supplier database and flags discrepancies before they become problems. QC gets one that reads inspection reports against your quality standards and logs deviations without waiting for someone to remember to log them.
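As an illustration of the finance example, here is a minimal sketch of the discrepancy check such an agent might run. The data, field names and tolerance are all invented for the example; a real deployment would pull from your actual supplier database.

```python
# Hypothetical core rule for the invoice-checking agent: compare each
# incoming invoice to the matching purchase order and flag anything
# that does not line up, before it becomes a problem.
purchase_orders = {"PO-1001": 2500.00, "PO-1002": 480.00}
invoices = [
    {"po": "PO-1001", "amount": 2500.00},  # matches the PO
    {"po": "PO-1002", "amount": 520.00},   # over the agreed amount
    {"po": "PO-9999", "amount": 100.00},   # no matching PO on file
]

def flag_discrepancies(invoices, orders, tolerance=0.01):
    issues = []
    for inv in invoices:
        expected = orders.get(inv["po"])
        if expected is None:
            issues.append((inv["po"], "no matching purchase order"))
        elif abs(inv["amount"] - expected) > tolerance:
            issues.append((inv["po"], f"amount {inv['amount']} != PO {expected}"))
    return issues

for po, reason in flag_discrepancies(invoices, purchase_orders):
    print(po, "->", reason)
```

The LLM's job in this setup is the messy front half (reading the invoice document and extracting the fields); the check itself stays simple, deterministic, and auditable.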
A few honest caveats. This is still early. Implementation requires someone who understands both the technology and your business well enough to configure it sensibly, and a poorly built wiki produces a confidently wrong agent, which is arguably worse than no agent at all. If that sounds like a problem worth outsourcing, 3peat.ai was built for exactly this.
But here is the thing worth considering: every query your staff runs through ChatGPT today, every document your team feeds to Claude, is your company's institutional knowledge, accumulating on someone else's infrastructure, making someone else's model smarter.
At some point, that is worth keeping on your own desk.
Ready to use the 3peat AI Framework Builder?
Use the 3peat AI Framework Builder to list your AI systems, classify risk, and generate a practical governance framework your team can implement immediately.