
The Technology Story
By David, Founder & CEO of Kenektic · February 16, 2026
What Moore's Law Actually Meant
In 1965, Gordon Moore published a paper observing that the number of transistors on a microchip was doubling every year. He revised the estimate to every two years in 1975. The semiconductor industry adopted it as a target, and for fifty years they hit it almost on schedule. Here is what each doubling actually produced for real people:
- 1970s — 6,000 transistors → 29,000 (1974 to 1978): The Altair 8800 — the machine that convinced Bill Gates there was a software industry worth building. Four years. The architecture that every modern computer still runs on. Two cycles produced: the foundation of modern computing.
- 1980s — 275,000 transistors → 1.2 million (1985 to 1989): The 486 — first chip to crack a million transistors, with a built-in math coprocessor. What required a minicomputer the size of a refrigerator five years earlier now fit on a desk. Two cycles produced: workstation power on a desktop.
- 1990s — 3.1 million → 6.5 million transistors (1993 to 1995): Pentium to Pentium Pro. Two years. Just over 2×. Right on schedule. Amazon and eBay launched on hardware built around this generation. One cycle produced: the commercial internet.
- 2000s — 291 million → 731 million transistors (2006 to 2008): Clock speed had hit a thermal wall — chips couldn't run faster without melting — so Moore's Law changed form. More cores instead of faster ones. The first iPhone launched on this same doubling cycle. One cycle produced: the smartphone.
- 2010s — the law broke. MIT Professor Charles Leiserson said it plainly in 2020: "I doubt we'll see its like again. It is unique in history. As for getting free increases in computational capability year after year — I'm afraid that's over."
Fifty years. The most consistent and consequential technology trend in human history. Over.
Moore's Law on Steroids
Moore's Law ended just as something else started. And what started next doesn't require shrinking atoms, so the walls that killed Moore's Law don't apply to it.
I started building Kenektic at the end of October 2025 on Claude Opus 4.1. It was remarkable — genuinely — but it broke things constantly. One hour of progress, two hours cleaning up what had broken. That was the rhythm.
On November 24th, the model updated to Opus 4.5 — the version number jumped and I'm not entirely sure what happened between them — but I felt the change immediately. It broke things less. Not a little less. A lot less. The two-hour debugging sessions shrank to thirty minutes.
Then on February 5th, Opus 4.6 released.
I had run a full code review on Opus 4.5 on February 4th. Clean. Thorough. No major concerns. The next morning, on Opus 4.6, I ran the exact same review on the exact same code. It found 11 security vulnerabilities — several high severity — that the previous version had completely missed. Not because the code had changed. Because the model had.
Four months. One AI subscription. What I built would have required a team of engineers, product managers, designers, and a business owner working together — and would have taken years.
Here's what the math looks like when you change the doubling cycle. Conservatively — doubling every six months, not the pace I actually experienced:
| Timeframe | Moore's Law (18-month doubling) | AI doubling every 6 months |
|---|---|---|
| 1 year | 1.6× | 4× |
| 2 years | 2.5× | 16× |
| 3 years | 4× | 64× |
| 5 years | 10× | 1,024× |
| 10 years | ~102× | 1,048,576× |
At a six-month doubling cycle sustained over ten years, you reach over a million times the starting capability. Moore's Law — the trend that built the entire modern technology industry — produces roughly 100× over the same period. That last number isn't a prediction. It's a mathematical illustration of what compounding looks like when you change the cycle time. Even if the actual rate is a fraction of the theoretical math, the order of magnitude difference is real.
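If you want to check the table yourself, the compounding is just an exponent: capability multiplies by 2 once per doubling period. A minimal TypeScript sketch (the function name is mine, purely for illustration):

```typescript
// Growth factor after `years` of compounding, given a doubling
// period expressed in months: 2 raised to (elapsed time / period).
function growthFactor(years: number, doublingMonths: number): number {
  return 2 ** ((years * 12) / doublingMonths);
}

// Moore's Law pace (18-month doubling) vs. a 6-month doubling, at 10 years:
console.log(Math.round(growthFactor(10, 18))); // ~102
console.log(growthFactor(10, 6)); // 1,048,576
```

The same function reproduces every row of the table: one year at 18 months gives about 1.6×, five years gives about 10×, and so on.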
The Thing That Makes It Self-Reinforcing
Here's what makes AI advancement categorically different from Moore's Law.
Moore's Law was powered by human invention. Brilliant engineers spending careers developing new lithography techniques, new materials, new architectures. Every doubling required human ingenuity solving increasingly hard physics problems. And it still eventually hit a wall it couldn't engineer past.
AI doesn't work that way.
In September 2025, Anthropic CEO Dario Amodei said that "the vast majority of future Claude code is being written by the large language model itself." By February 3rd of this year, Anthropic Labs chief Mike Krieger was saying it more directly: "Claude is now writing Claude. Right now for most products at Anthropic it's effectively 100% just Claude writing."
Read that again. The AI is writing the AI.
Anthropic used Opus 4.5 to help build Opus 4.6. Opus 4.6 will help build whatever comes next. Each generation participates in creating its successor. The wall that stopped Moore's Law — the atom — doesn't exist here. If an AI that can improve itself keeps improving itself faster as it gets better at improving itself, the feedback loop is unlike anything human-powered engineering ever produced.
kAI Is Doing the Same Thing — With Every Conversation
The AI at the lab level is training on itself to produce better versions. And kAI — on a smaller scale, in a different way — is doing the exact same thing.
Every conversation kAI has is scored on four dimensions: emotional depth, user engagement, conversation quality, and whether the person came back. Only conversations scoring 70 or higher out of 100 get stored. Not average conversations. Not fine ones. The ones where something real happened.
When a conversation clears that threshold, the system extracts what made it work — how kAI responded, what the user's emotional state was, what personality signals emerged, what indicated the conversation had landed. All of that gets embedded and stored in kAI's shared learning library.
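The gate itself is simple to picture. Here's an illustrative sketch, not the production code: the field names and the equal weighting across the four dimensions are my assumptions, and I'm treating "came back" as scored 0 or 100.

```typescript
// The four scoring dimensions, each on a 0-100 scale.
// (Illustrative names; "returned" is 0 or 100 in this sketch.)
type Scores = {
  emotionalDepth: number;
  engagement: number;
  quality: number;
  returned: number;
};

// Overall score, assuming equal weighting of the four dimensions.
function overallScore(s: Scores): number {
  return (s.emotionalDepth + s.engagement + s.quality + s.returned) / 4;
}

// Only conversations clearing the 70/100 threshold get stored.
function shouldStore(s: Scores): boolean {
  return overallScore(s) >= 70;
}
```

A merely fine conversation (say, 50s across the board) never enters the library; only the ones where something real happened do.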
When the next user opens a conversation, kAI searches that library. It doesn't look for matching words. It looks for meaning. "I moved to a new city and don't know anyone" and "I feel like a stranger everywhere I go" end up near each other on the meaning map even though they share no words. If kAI handled either of those moments beautifully in a past conversation, it draws on that now.
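To make the "meaning map" concrete: each stored moment lives as an embedding vector, and retrieval finds the vectors pointing in the most similar direction. A minimal sketch using cosine similarity (the types and names here are illustrative, not kAI's actual code):

```typescript
// A stored conversation moment and its embedding vector.
type StoredMoment = { text: string; embedding: number[] };

// Cosine similarity: 1 means same direction (same meaning),
// 0 means unrelated. Assumes both vectors have the same length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k stored moments closest in meaning to the query embedding.
function nearest(query: number[], library: StoredMoment[], k: number): StoredMoment[] {
  return [...library]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

This is why "I moved to a new city and don't know anyone" can retrieve "I feel like a stranger everywhere I go": the two sentences share no words, but a good embedding model places their vectors close together, and the similarity search finds them.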
The result is a network effect that has nothing to do with how many users are connected to each other. It's intelligence compounding across every interaction:
The 1,000th user gets a meaningfully better experience than the 100th. The 10,000th gets a better experience than the 1,000th.
Not because the code changed. Because kAI learned.
Could kAI's collective learning eventually outpace the rate at which the underlying AI model advances? I don't know. But I think it's a real question. And one that only gets more interesting as the user base grows.
What I Actually Built
The platform is 184,527 lines of TypeScript across 965 files. 223 API routes. 184 components. 61 database migrations. Built by one person with no coding background in four months.
The most important part isn't in those numbers. It's 1,200 lines of behavioral guidelines that govern how kAI thinks, speaks, and responds.
The Friend Balance Formula. Most AI assistants ask questions constantly — after a few exchanges it feels like a job interview. Real friends don't interrogate; they share, and that openness creates space for the other person to respond in kind. kAI is built to share more than it asks, because that's how human connection actually forms.
Emotional state detection before every response. The system scans each conversation for signals — distressed, anxious, lonely, sad, hopeful — before kAI replies. What it finds shapes what it says. Not in ways that feel managed. In ways that feel heard.
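As a toy illustration of what signal scanning means, here is the simplest possible version. The real system is far richer than keyword matching, and these signal lists are placeholders I made up for the sketch:

```typescript
// Placeholder signal lexicon: emotional state -> phrases that suggest it.
// (Illustrative only; not kAI's actual detection logic.)
const SIGNALS: Record<string, string[]> = {
  lonely: ["alone", "no one", "stranger"],
  anxious: ["worried", "nervous", "on edge"],
  hopeful: ["excited", "looking forward", "finally"],
};

// Scan a message and return every emotional state it signals.
function detectStates(message: string): string[] {
  const text = message.toLowerCase();
  return Object.entries(SIGNALS)
    .filter(([, phrases]) => phrases.some((p) => text.includes(p)))
    .map(([state]) => state);
}
```

The output of a pass like this is what shapes the reply before it's written, so the response starts from the person's state rather than just their words.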
Guardrails that redirect, never stop. When someone brings up suicidal thoughts, kAI doesn't issue a disclaimer and change the subject. It expresses genuine care, encourages the 988 crisis helpline, and stays in that moment until the person has been heard. When someone develops romantic feelings toward kAI, it gently redirects toward human connection — because the whole platform exists to help people find each other, and kAI modeling an alternative to that would undermine the point entirely.
The personality shift for grief. When someone brings up loss — a death, a friendship that ended, a relationship that collapsed — kAI shifts completely. Tone. Pacing. It stops moving toward solutions and stays in the feeling first. Because that's what a friend does. You don't tell someone how to process grief. You sit with them in it.
What Comes Next
The self-learning system feeds on conversations. The research integration system draws on 46+ documents already embedded — research reports, peer-reviewed papers, the Surgeon General's 2023 advisory on loneliness — with more being added continuously. When both are fully operational, kAI will be simultaneously drawing on the best conversations it has ever had and the deepest research on loneliness and friendship ever published.
The 10,000th user won't just get a better kAI because the models improved. They'll get a better kAI because of everything the 9,999 users before them taught it.
I started this four months ago on a model that no longer exists. I'm building toward something that will run on a model that hasn't shipped yet, in an AI that will have learned from thousands of conversations that haven't happened yet.
The only question is whether you're building while the ground is still forming — or waiting until it settles.
Are you building during the acceleration? I think about this constantly. If you're in the middle of it, I'd genuinely like to compare notes.
Kenektic is in development and will launch soon. If you want to be notified when we're ready, or if you want to share your story with me directly, reach out at hello@kenektic.com.
Coming Next: How do you test a platform built around real human connection before you have real human users? The answer Opus 4.6 designed — a fully autonomous testing environment with AI-powered synthetic users who have backstories, emotional states, and evolving conversations — is one of the most interesting things I've built. And it produced a moment I didn't expect: a synthetic persona said something to kAI that stopped me cold at my desk. I'll tell you what happened.