“Software — the most powerful medium of all — has remained locked behind code, complexity, and gatekeepers.”

“[Software was] Made for people, then by people.”


Eugenia Kuyda recently argued that the world’s twenty million developers were “gatekeepers” of software. Until last year, she claims, only professional developers could make software - “very few people in the world.” AI, by implication, broke the gate.

Most of the global population cannot ship reliable, secure, and scalable software (even with AI, people get confused going from localhost to a domain). Now AI tooling has lowered the practical floor for producing code that runs. That’s true. And if the claim were “AI has reduced the time-cost of producing functional software”, it would be defensible. But Kuyda said something far more abrasive, something that fundamentally misconstrues the industry and culture that put software in her hands in the first place. If people had not spent the time and effort to become software engineers, there would have been no one to engineer and assemble it. Software as a category wouldn’t exist. In fact, the hardware in Kuyda’s laptop couldn’t boot, or its components communicate coherently, without firmware.

The claim she made is that the previous state of affairs constituted gatekeeping - that some class of people (software engineers) was actively keeping others out. That word is doing the work in the argument, and it is the wrong word.

This essay takes the framing seriously, grants the empirical premise that software production was hard, and shows why the gatekeeping frame is wrong on its own terms - wrong about what existed, wrong about what changed, and wrong about what the change implies.

The Slogan Fails Both Readings

Made for people, then by people. The chiasm is what makes the line memorable, and it is what conceals two different problems. The historical reading is false. The literal reading is incoherent. I mean, who made it for people in the first place? People. Unless Kuyda thinks SWEs aren’t people.

Let’s start with the history, because Made for people treats fifty years of computing as a one-way broadcast - professionals producing software, users consuming it, with a fixed line between them.

  • VisiCalc shipped in 1979. Excel followed in 1985. By any reasonable measure the spreadsheet is the most widely used programming environment on earth, almost entirely operated by people who do not call themselves developers.
  • HyperCard arrived in 1987 under the tagline “programming for the rest of us”, and BASIC and AppleScript were designed from the start for users, not engineers.
  • WordPress, Wix, and Squarespace have spent two decades putting non-developers in charge of the software that runs their public-facing presence on the internet.
  • Game engines like Unity and Unreal ship extensive no-code and visual-scripting capabilities designed for non-programmers.
  • Bubble, Retool, Zapier, Airtable, n8n - the no-code wave is more than a decade old, and multi-billion-dollar companies have been built on the premise that anyone should be able to assemble functional software without writing it.
  • Roblox and Minecraft script communities, mod ecosystems for any game with a healthy long tail - millions of users producing software in environments designed to make production accessible.

The “made for people” frame is only true if the category is drawn narrowly enough to exclude all of this. It requires defining “software” as the kind of thing professional engineers ship at companies, then noting that professionals ship it. The participatory history of computing is collapsed into a top-down one to make the discontinuity look sharper than it is. The gatekeeper line is the same move in louder rhetoric.

Now, Then by people. When a user types a prompt and an LLM emits a working application, who is doing the making? The user described what they wanted. The model - trained on the open-source work of the same twenty million developers Kuyda is dismissing - produced the artifact. The accurate description is commissioned by people, generated by a model trained on the work of professionals.

Software Does Not Come from Thin Air

The slogan’s deeper failure is what it reveals about Kuyda’s mental model. Made for people, then by people only works as a chiasm if software is the kind of thing that arrives in the world without producers. That is the assumption underneath the slogan, and LLMs make the assumption easy to hold. A user types a prompt and the artifact appears. The production looks costless.

However, the costs were paid earlier, in places the user is not asked to look. The model was trained on the open-source corpus that the twenty million developers Kuyda dismisses chose to publish. The cloud the generated app runs on was built by infrastructure engineers who do not get a credit. The security patches in the libraries the generated app depends on were written, tested, and merged by people who spend their careers on exactly that work. The user is at the end of a long chain of production, and the chain is the part that determines whether the artifact is functional, safe, or merely plausible.

Quality, security, and scalability do not migrate into a visionary’s prompt. They live in the substrate. A non-developer prompting an LLM gets a thing that runs. The thing that runs is not the thing that holds up under load, against an attacker, in production for two years, or in a dynamic market for that matter. The gap between runs and holds up is where the producers Kuyda dismisses have always lived.
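The gap between runs and holds up can be made concrete with a hypothetical sketch. The function names, schema, and scenario below are invented for illustration: a user-lookup query of the kind an LLM will happily generate. Both versions run; only one survives contact with an attacker.

```python
import sqlite3

# Hypothetical illustration: two lookups that both "run".
# The naive one interpolates user input into SQL, so an attacker
# can rewrite the query (classic SQL injection).
def find_user_naive(conn, username):
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# The parameterised version treats input as data, never as SQL.
def find_user_safe(conn, username):
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "' OR '1'='1"                    # classic injection input
leaked = find_user_naive(conn, payload)    # matches every row in the table
safe = find_user_safe(conn, payload)       # matches nothing
```

In a demo with friendly inputs the two functions are indistinguishable, which is exactly the point: the difference only shows up under adversarial conditions the prompt-and-ship workflow never exercises.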

The cost is also being mispriced right now. Frontier model inference is sold below what it costs to provide; the gap is filled by venture subsidy, stable only until the next round prices it differently. Beneath the price are constraints the slogan does not name:

  • data centre power, which is increasingly the binding constraint on capacity expansion across the United States and Europe;
  • foundry capacity concentrated in a handful of fabs in two countries;
  • the specialised expertise to design, train, and operate frontier systems, concentrated in a few hundred people across a handful of labs.

The production chain behind “then by people” is narrower and more concentrated than the one it claims to replace.

Kuyda’s pitch-deck slogans promote an illusion that software is a stable climate. In reality, the collision of software and AI will feel more like a run of radical weather: storms, droughts, blizzards, floods, fires.

A Gate Twenty Million People Walk Through

Gatekeeping requires a gatekeeper - an agent or class who decides who passes. Software engineering, historically, has had effectively none. There is no licensure body, no guild, no admission test, no professional approval required to write or ship code. The discipline is, by professional culture, the most aggressively open field in modern technology. Source code is published. Documentation is free. Tutorials, debugging guides, library references, and the entire infrastructure of personal blogs, Stack Overflow, GitHub, and YouTube exist because practitioners have spent thirty years choosing to externalise what they know. Compare this to law, medicine, finance, accounting, or any traditional craft, and the asymmetry is severe. Seriously, I can’t believe Kuyda has overlooked this. She reads like someone annoyed that she ever had to pay an engineer to build something, not realising that she runs her payroll on her laptop, through a website, in an application - all of it sitting on a web of complexity in specification, implementation, security, and compliance that no single person can fathom.

Twenty million people are inside the discipline. The framing requires that we treat this number as an exclusionary outcome rather than the visible result of a self-selected population willing to spend the time. The arithmetic alone defeats the claim. A gate that twenty million people pass through is not a gate. It is a filter for time and effort, which is what every skill is - and software engineering’s filter is among the most permeable in the modern economy.

Test the framing on another domain. Take farmers: are they gatekeeping food? Nobody would say it. Farmers are producers. The food on shelves is the artefact of their work, not a barrier between consumers and a thing they’d otherwise have. Calling producers gatekeepers is a category error.

The further irony is that the AI tooling Kuyda credits with breaking the gate was built by software engineers and trained on the open-source corpus those same engineers chose to publish. The democratisation she points to is a derivative of the openness she implies was missing.

More Software Is Not More Good Software

The unstated premise of the AI-democratisation argument is that the constraint on useful software is the number of people who can produce it. This is wrong in a way that is observable. Software has been in quality decline for years. The constraint was never headcount. It was attention, taste, and integration with real-world systems whose costs and feedback loops cannot be hidden behind a generated function.

Acceleration of production does not produce more good software. It produces more software. Every additional unit competes for the same finite attention from users, the same finite human review for security, the same finite operational capacity for maintenance. There is a coherent case that producing more software faster, by people with less context for what makes software fail, will reduce the average quality of deployed code while increasing its volume.

AI Competence Is Partly an Artefact of the Reader’s Ignorance

Kuyda’s confidence in AI-produced software depends on the reader’s confidence that the software works. That confidence is unreliable in a specific way: when a user evaluates AI output in a domain they understand, they see the cracks immediately - the subtle wrongness, the missing edge cases, the architecturally suspect choices. When the same user evaluates output in a domain they do not understand, the same cracks are invisible, and the output reads as competent.

This is the Gell-Mann Amnesia pattern transposed onto code. The most dangerous case is the founder who does not write code, watching AI produce something that looks like working software in a domain they cannot independently verify. The output is then deployed against real systems - payments, medical records, identity - where the cost of subtle wrongness is not “ugly code” but money lost or harm done. The democratisation framing implicitly treats software production as a uniformly low-stakes activity when it’s not.
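A hypothetical example of the kind of crack that is invisible to a non-expert reader: money handled as binary floats. To someone who has never been bitten by floating-point representation, the code below reads as perfectly competent; in a payment system it quietly drifts.

```python
from decimal import Decimal

# Hypothetical sketch: a "30 cent" total that looks obviously correct.
price = 0.1 + 0.2
print(price)        # 0.30000000000000004

# The crack: binary floats cannot represent most decimal fractions exactly,
# so the equality check a ledger or refund path would rely on fails.
assert price != 0.3

# The fix a domain expert reaches for: exact decimal arithmetic.
exact = Decimal("0.1") + Decimal("0.2")
assert exact == Decimal("0.3")
```

Nothing crashes, nothing logs an error, and every demo passes. The wrongness only surfaces at reconciliation time, which is precisely the kind of failure a reader without domain context cannot see coming.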

Just because your machine ‘just works’, and your password is a combination of your name, birth date, and pet’s name, doesn’t mean software is easy and fun to produce. It took the industry years of accumulated experience to get to this point. Keeping ‘just works’ secure - so that it doesn’t end in identity theft - is going to take even more going forward.

Even the Models Care About Code Quality

A stronger version of the AI-democratisation argument sometimes runs: code quality is a human aesthetic concern; the model does not need clean code to function. This is empirically false. Large sprawling functions degrade an LLM’s ability to manage context. Files that exceed the model’s effective working memory produce worse edits. The leaked Claude Code source, with its thirty-two-hundred-line function and twenty-eight nested closures, is a worked example of what happens when the constraint is ignored: the next agent tasked with maintaining the code performs worse, and the human review that might have caught the structural problems was skipped because the assumption was that the model handles it.

Code quality is not a human preference imposed on machines. It is a load-bearing constraint on whether the system remains modifiable - by humans or by models. The reassurance that AI-generated mess is fine because AI can clean it up dissolves on inspection.

What the Defensible Version Actually Says

AI tooling has reduced the time-cost of producing functional code, and this expands the population of people who can ship something that runs. It is also unremarkable - every prior generation of tooling did this, from compilers to IDEs to high-level languages to package managers. The history of software is the history of the floor rising. AI is the latest move in a long sequence. And if it exceeds that, humans will be so far out of the equation that Wabi’s already poor value proposition wouldn’t stand at all.

What is novel is the rate. What is not novel is the structure: an easier floor has never produced more good software, only more software, and the gap between “runs” and “works” widens as the floor rises. The cost of producing code that compiles has fallen. The cost of producing code that works in a real system has not fallen by anything like the same amount. The cost of acquiring the judgement that distinguishes the two is unchanged.

I mean, what are LLMs going to be trained on in future? The human-approved slop they’re generating now, if we aren’t extremely careful. We should create incentives for people to take care over both generation and curation, so that we get quality rather than slop. Unfortunately, the way things are going, the people who can do both well will become rarer and rarer as junior and mid-level engineers get less of a chance to master their craft.

The Diagnosis: Mistaking Ease for Universality

The deeper failure of the gatekeeping claim is psychological. Founders who use AI tooling fluently (or so they think) often attribute the fluency to the tool. The reasoning runs: I produce software easily with AI; AI is the variable that changed; therefore anyone with access to AI can produce software easily.

This is a recurring failure mode. Either people did the hard work of acquiring the context and forgot what it cost, or they were privileged with circumstances that compressed the cost - capital, mentorship, time without competing demands, early access to opportunities - and have mistaken privilege for the natural state of the world. The “anyone can do this” claim is then exported as universal advice, and the people it reaches discover the gap between the founder’s experience and their own only after they have committed time and money to the framing. The survivors forget they are survivors - survivorship bias - and misunderstand how entropy, competition, and markets work.

The corrective is not to deny that AI changes anything. It is to name what it changes, accurately, and to refuse the moral framing that smuggles in a verdict the evidence does not support. There were never gatekeepers in software engineering. There was a discipline that gave away nearly everything it knew and asked, in return, only that you spend the time. The people who didn’t take it up made a choice. Calling that choice a gate is a story that flatters everyone except the people who actually built the field.

Commissioning Is Not Producing

The gatekeeper claim is not only a misreading of an industry. It is the public version of a pattern that runs quietly inside companies - the founder pattern of taking credit for work the founder did not do, then retracting the admission when the framing turns.

Kuyda has spent two ventures commissioning software, not producing it. Generalising her experience to “anyone can produce software” requires conflating the two - a conflation universal among founders who have never been on the other side of the commissioning relationship. They cannot see what they have never lacked. The same conflation produces DJs “gatekeeping” music, photographers “gatekeeping” images, and publishers “gatekeeping” writing - all forgetting that someone invented vinyl, CDs, and .mp3; cameras, film, and .jpeg; paper, the typewriter, the teletype, keyboards, and computers.

The conflation of commissioning with producing also enables a quieter narrative that runs internally. “The team built X” compresses to “I built X” as conversational shorthand, and over time the shorthand becomes the self-concept. Once you believe you made it, the engineers stop being authors - they become tools you used. Once they’re tools, anyone with a similar tool is in the position you were. This is the move that lets a person who currently employs engineers tell users, sincerely, that they don’t need engineers. The people doing the work have been edited out of her internal story long before she exports the story to the public.

Kuyda is making this move at a wider angle. Replika was built by engineers (hopefully) she hired because she was not one. Wabi is being built the same way. The class she now calls gatekeepers is the class she has been paying, for a capability she does not have herself, for two companies in a row. Hiring a specialist is the founder’s confession that the capability was not theirs. The retraction comes later, in a pitch deck or a layoff memo, where the work is reframed as fungible, the engineer as replaceable, the original need as a luxury the company once indulged. The “Made for people, then by people” framing is the cleaner version of the same retraction - it implies a future in which the people who actually built the field were never the operative ingredient. The pitch flatters the audience by erasing the fact that they are beneficiaries of an incredibly driven and generous field of people.

None of this requires unusual cynicism. Sincere belief in a framing like Kuyda’s is the common case, and is part of why founders who hold it succeed at fundraising. It doesn’t mean they know what they are talking about, though.