Sungat Arynov

Your tech stack is your cognitive prison.


In the article "I Couldn't Answer Algorithm Questions in Interviews. But I Had Working Products" I wrote about the "cognitive cockroach" strategy and the idea that the tool is a consumable, but thinking is not. One question, however, remained open. Why do developers cling to one stack? Why does a Java engineer see factories in every task, while a 1C developer breaks every API down into registers? It turns out linguists described this mechanism long before programmers. The language you think in defines the boundaries of what you are capable of conceiving. This article is about walls we do not see because we look at them from the inside.

In the language of the Pirahã tribe from the Brazilian Amazon, there are no words for numbers. None at all. There is "few," "more," and "many." When researcher Daniel Everett asked the Pirahã to set aside as many stones as there were nuts in a row, they could not manage even with five items. Not because they are stupid. Their brain is the same as that of an MIT graduate. But their language does not contain tools for precise counting, and without these tools, the cognitive apparatus simply does not form the corresponding ability.

In the 1940s, linguist Benjamin Lee Whorf and his mentor Edward Sapir formulated the hypothesis of linguistic relativity. The structure of the language you speak influences how you think and perceive reality. If your language uses one word for blue and green (as in Japanese until the 20th century, where "aoi" covered both colors, or in Kazakh), your brain distinguishes these shades more slowly. The studies by Kay and Kempton confirmed this experimentally. Speakers of languages with different color categories literally see the world differently. Language is not a tool for describing reality. It is a filter through which you perceive this reality 🧠

The hypothesis exists in two versions. Strong: language completely determines thinking. Weak: language influences thinking but does not entirely constrain it. The academic community supports the weak version - hundreds of studies since the 1990s have confirmed that linguistic categories do indeed influence cognitive processes.

Now let's take this hypothesis out of the philologists' offices and move it to where it works every day - in the IDEs of millions of developers.

🔨 Maslow's Hammer in the World of Code

Abraham Maslow once said: "If the only tool you have is a hammer, everything around starts to look like nails." In psychology, this is called the Law of the Instrument. Nearby is the Einstellung effect - a cognitive bias where a familiar solution blocks the search for a more optimal one. Abraham Luchins described it in 1942 in a series of elegant experiments.

Participants were given three jugs of different capacities and asked to measure out a specific amount of water. The first five tasks were all solved by the same roundabout algorithm: fill jug B, pour from it into jug A once, then into jug C twice (B − A − 2C). It works, and participants memorize it. The sixth task had a simple solution - just combine A and C. But most participants kept applying the old complex formula. The simple solution was right in front of them, and they didn't see it. Literally: the brain didn't process the alternative because the familiar path activated faster.
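The arithmetic can be made concrete with a small Go sketch. The jug volumes here are illustrative, modeled on Luchins' published problems; the point is that both formulas hit the target, but only one of them is two steps long:

```go
package main

import "fmt"

// Illustrative volumes in the spirit of Luchins' problems:
// jug A = 15, jug B = 39, jug C = 3 units; target = 18 units.

// trainedFormula is the roundabout path drilled by tasks 1-5: B - A - 2C.
func trainedFormula(a, b, c int) int {
	return b - a - 2*c
}

// directFormula is the simple solution to task 6: just add A and C.
func directFormula(a, c int) int {
	return a + c
}

func main() {
	a, b, c, target := 15, 39, 3, 18
	fmt.Println(trainedFormula(a, b, c) == target) // true - the habit still "works"
	fmt.Println(directFormula(a, c) == target)     // true - but this path is far shorter
}
```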

When Luchins simply told participants before the sixth task "Don't be blind," more than half suddenly found the simple way. One phrase was enough to break the mental set. Remember this - we will return to this idea later.

Transfer this experiment to programming and you get an accurate picture of what happens after several years of working with one stack. The tool you write in shapes not only your code. It shapes the way you think about problems. Your favorite framework is not a set of commands. It is your vocabulary. And if this vocabulary is limited to one syntax, you gradually begin to see the whole world through its prism 🔥

🧩 The Blub Paradox: Why You Don't See the Walls of Your Cage

Kenneth Iverson, the creator of the APL language, was one of the first to apply the Sapir-Whorf hypothesis to programming. His 1979 Turing Award lecture was titled "Notation as a Tool of Thought." The key thesis: more expressive notation not only simplifies writing. It expands the space of thoughts that are possible at all. A language that lacks convenient notation for matrix operations not only complicates matrix calculations. It makes them unthinkable in the literal sense - the brain won't go in that direction because it lacks the cognitive tools for it.

Twenty years later, Paul Graham, co-founder of Y Combinator, developed this idea in the famous essay "Beating the Averages" and described the Blub paradox. Take a hypothetical mid-level language - Blub. A Blub programmer looks "down" at simpler languages and clearly sees that they are weaker, lacking features he is used to. "How can you even work with this? It doesn't even have X!" But when he looks "up" at more powerful languages, he doesn't realize he's looking up. He sees "strange languages with unnecessary frills." Unnecessary complication. An academic toy. In Graham's words: "Blub is good enough for him, because he thinks in Blub."

The paradox is that you cannot assess the power of a tool you've never used. How to explain the concept of closures to a C programmer who has never worked with higher-order functions? For him, it's an "unnecessary abstraction." Not because he's lazy. But because his cognitive apparatus doesn't contain a category into which this concept could fit. Like the word "blue" for a people whose language has the same word for blue and green.
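For readers meeting the concept for the first time: a closure is a function that captures variables from its enclosing scope and keeps them alive between calls. A minimal sketch in Go:

```go
package main

import "fmt"

// counter returns a closure: the returned function captures the local
// variable n, which survives between calls - something a plain C
// function pointer cannot express without extra machinery.
func counter() func() int {
	n := 0
	return func() int {
		n++
		return n
	}
}

func main() {
	next := counter()
	fmt.Println(next()) // 1
	fmt.Println(next()) // 2

	// A second closure gets its own independent n.
	other := counter()
	fmt.Println(other()) // 1
}
```

Until you have a mental category for "a function carrying its own state," this looks like an unnecessary frill; once you do, half of callback-based API design becomes obvious.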

Chewxy, the creator of the Gorgonia library for Go, described on his blog how the Blub paradox blinded him when designing interfaces in Go. Go's syntax for methods with receivers - func (r receiver) methodname() - was so reminiscent of OOP that his brain automatically applied OOP patterns to everything. He was stuck on a solution dictated by habitual thinking. The breakthrough came only when he asked himself: "How would I solve this in Haskell?" Not because Haskell is better than Go, but because the question forced his brain out of the familiar cognitive corridor to consider an alternative that was literally invisible from the Go perspective.


🪟 How It Looks in the Wild

I have been working in IT for 11 years and during this time I have used dozens of tools and technologies. Not because I can't choose and not because I'm chasing hype. Each tool is a consumable, but thinking is not. And thanks to this position, I was able to observe the same pattern in very different teams and companies.

1C-world. The 1C:Enterprise platform is a powerful tool for accounting and business automation. But it forms a very specific mindset. Once I observed a 1C developer with 12 years of experience being tasked with creating a simple API for a mobile app. Accept JSON, process it, return a response. His first reaction: "We need an information register, a document to record the request, processing for routing..." He couldn't conceive the task outside the categories of his platform. Not because he was incompetent - within 1C he was a virtuoso. But his "language" didn't contain the words to describe a REST endpoint that simply accepts a request and returns a response. Like the Pirahã with stones: the cognitive apparatus is the same, the tools are different.

Java-enterprise. A classic story: a startup hires a Java developer from a large bank. The task is to create an MVP of a product catalog. Simple CRUD, a couple of pages, a database, search. A month later, the repository contains 47 classes. AbstractProductServiceFactory. ProductRepositoryStrategyImpl. CatalogItemValidationInterceptorChainBuilder. Each class contains 10-15 lines of actual code and 40 lines of boilerplate. The developer sincerely believes he is doing the right thing - in his world, "clean architecture" means exactly this. He wasn't taught that for an MVP catalog, one file with two functions is enough. He was taught that if there's no factory, the code is "dirty." Language defines thinking 😅

React-universe. A frontend developer, five years on React. A task comes from the business: "On the product page, we need a 'Buy' button that adds the product to the cart." What the business sees: a button, a click, the product in the cart. What the React developer sees: a CartContext with useReducer for state management, a custom hook useCart for logic abstraction, a separate CartProvider at the application level, memoization through useMemo to avoid unnecessary re-renders, and of course, a separate AddToCartButton component with its own loading state. The task "a button that puts the product in the cart" turns into 8 files and 300 lines of code. Meanwhile, the user still needs the same thing: press the button - product in the cart. The focus has subtly shifted from the user's pain to the elegance of the internal implementation.

Kubernetes-thinking. A separate phenomenon. There is a category of engineers for whom k8s has become not a tool for orchestration, but a philosophy of being. A startup, three users, an MVP for hypothesis validation. The first question such an engineer asks: "How many nodes are in the cluster?" The second: "Do we need Istio for service mesh?" The third: "Which Ingress Controller will we choose?" The fourth question - "What problem are we actually solving?" - is never asked. Any project is automatically decomposed into microservices with helm charts because the "language" of the engineer does not provide for any other description of the task. I have seen a team of two people spend three weeks setting up a Kubernetes cluster for a service that would live perfectly on a single Docker Compose with two containers.
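What the "single Docker Compose with two containers" alternative looks like, in full. Service names and images here are illustrative, not the actual project's config:

```yaml
# docker-compose.yml - the entire "infrastructure" for an MVP with three users.
# Service names and images are illustrative.
services:
  app:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

No nodes, no Istio, no Ingress Controller - and nothing here takes three weeks to set up.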

In all these cases, one mechanism is at work. According to a 2018 University of Chicago study, although all programming languages are formally Turing-complete (anything expressible in one can be expressed in another), the ways they support problem-solving and imagination differ fundamentally. The formal equivalence of languages does not mean cognitive equivalence of their users 🧠

🧪 The Semmelweis Reflex: Why a Simple Solution Causes Resistance

There is another layer. When a "narrow" specialist is offered a radically simpler solution, the reaction is often not curiosity, but defense.

In 1847, the Hungarian doctor Ignaz Semmelweis noticed that in the maternity ward staffed by doctors, the mortality rate was 10-35%, while in the ward staffed by midwives it was around 2%. He hypothesized that doctors were transferring "cadaverous particles" from the anatomy theater and proposed washing hands in a chlorine solution. Mortality dropped to about 1%. The result was obvious, the data irrefutable. The medical community's response: ostracism, dismissal, and eventually death in a psychiatric hospital. Colleagues could not accept an idea so simple and yet so destructive to their worldview.

In IT, harassment is not usually arranged. But the mechanism is the same. Show a React developer that a specific task can be solved in an evening with vanilla HTML and a couple of lines of JavaScript. Show a Java architect that his microservice with 47 classes can be replaced by a single Go file with 200 lines. Show a Kubernetes engineer that for this project, Docker Compose on a single server for €9 a month is enough. In all three cases, there is a high probability of encountering not interest, but an argued defense of the status quo. "It doesn't scale," "It's not production-ready," "What if there are 10 million users?"

It is important to note: this is not stubbornness or stupidity. It is cognitive self-defense. A person who has invested five years in mastering a certain toolkit perceives the devaluation of this toolkit as the devaluation of their own experience. The stack stops being a set of tools and becomes part of one's identity. "I am a 1C developer" sounds not like a description of a current skill, but as a description of who you are. The proposal to abandon 1C = the proposal to abandon part of oneself. Hence the emotional reaction to what seems to be a technical solution.

The 2008 study by Bilalić, McLeod, and Gobet on chess grandmasters demonstrated this mechanism directly. Experts were given a position with two solutions: a familiar (long) one and an optimal (short) one. Eye movements were recorded with an eye tracker. The result: even when the masters said they were looking for the best solution, their eyes kept returning to the squares associated with the familiar move. The brain filtered out alternatives unconsciously, before the player could even become aware of them. You fail to see the walls of your cell not because you are blind, but because your brain actively hides the exit from you 🔥


🎯 When the User Doesn't Care About Your Stack

Over 11 years, I worked with pharmaceuticals, retail, logistics, fintech. In all cases - the same picture. Businesses and users fundamentally do not care what pattern is under the hood. The user needs to: log in, press a button, get a result. If the page loads in three seconds - they will leave. They are not interested in the code being "architecturally clean." They are interested in everything working.

But a developer, living within their stack, often substitutes the task. Instead of "solving the user's problem," they solve "writing ideologically correct code within their framework." In behavioral economics, this is called Action Bias - the bias towards action. A person writes thousands of lines of boilerplate and complicates the architecture not because the product needs it, but because it creates a sense of productivity. "I wrote so much, created so many classes, built so many abstractions - therefore, the work is done."

I remember a project where the backend team spent three weeks designing the "correct" caching system with invalidation, TTL, L1/L2 layers. The problem they were solving: the catalog page loaded in 4 seconds. The solution that ultimately worked: one SQL query was rewritten, an index was added. The load time dropped to 200 milliseconds. Three weeks of caching architecture turned out to be an attempt to solve a problem that did not exist. But within their "language" (microservices, Redis, multi-layered caching) there could be no other description of the problem 🛠️

🚪 How to Expand Cognitive Space

Remember Luchins' experiment, where one phrase "Don't be blind" broke the mental set in half of the participants? Awareness is the first and most powerful tool.

Ask yourself "How would X solve this?" Developer Chewxy described how the simple question "How would I solve this in another language?" pulled his thinking out of a dead end when designing in Go. You don't need to be an expert in Haskell to ask: "What if I approached this task functionally?" You don't need to know Erlang to think: "What if each request were a separate process?" The question itself is already therapy. It forces the brain to step out of its usual corridor and consider alternatives that the "language" of the current stack does not describe.

Try new tools on small projects. Not for the resume and not for changing jobs. For expanding the cognitive vocabulary. Microsoft developer Dave Remy wrote about how working with C#, Haskell, ML, Perl, and Ruby gave him "orthogonal perspectives on problem-solving" - each new language did not replace the previous one but expanded the set of mental tools. Write a pet project in Rust if you've been coding in Python all your life. Try Elixir if your world is Java. Not to "become a Rust developer." So that next time, when faced with a task in Python, you see not one familiar solution, but three.

Communicate with people from other stacks. Research has shown that groups solve problems more effectively than individuals because each participant brings their own "linguistic" experience. A frontender, backender, mobile developer, and devops at one table see four different tasks in one specification. And that's not a problem. It's a superpower.

Start with the question "What problem are we solving?" rather than "Which framework will we choose?" It sounds trivial. In practice, it turns the decision-making process upside down. When the team starts with the problem, the tool becomes a consequence. When it starts with the tool, the problem is deformed to fit the tool. It's like the Pirahã, who try to count stones with the words "few" and "many" - the task is numerical, but the language doesn't allow it.

That's why I currently write primarily in Go and build my projects in it. Go's minimalist syntax doesn't let you hide behind the magic of heavy frameworks. There are no hundreds of built-in abstractions shaping a particular type of thinking. You are forced to think about what goes into the system, what comes out of it, and how it solves a specific business problem. The tool recedes into the background, and the product logic comes to the forefront ⚡


🤖 AI Agents and the End of Linguistic Prisons

The era of AI adds another layer to this story. Neural networks are polyglots by definition. They have no Sapir-Whorf hypothesis, no neural pathways reinforced by five years of one framework, no identity tied to a stack, no Semmelweis reflex. An LLM will write a solution in any language, in any paradigm, without cognitive resistance.

And this is where it gets most interesting. An engineer who thinks in terms of "I am a Java developer" finds themselves in direct competition with a neural network. Because the LLM is also a "Java developer" - with perfect memory for APIs, zero context-switching time, and no coffee breaks. But an engineer who thinks in terms of "I know how to deliver value to the business; which syntax the LLM generates it in is secondary" occupies a position where there is no competition with AI. Understanding business context, architectural trade-offs, and user pain is not described by the syntax of any programming language. It is a meta-level above any stack.

According to the Stack Overflow Developer Survey 2025, 82% of developers already use AI tools weekly. AI has taken over the routine and boilerplate work - exactly the work that created the illusion of productivity within one stack. What's left is what AI can't do yet: see the system as a whole, understand why something is done, not just how, and make decisions in conditions of uncertainty 📊

One engineer with AI tools today performs the work that a team of three people used to do. But only if this engineer is not confined within a single stack. Because AI enhances the thinking you already have. If your cognitive apparatus sees the world through the prism of React components, AI will help you write React components faster. If your cognitive apparatus sees a business problem and can decompose it at any level of abstraction, AI becomes a multiplier of a completely different scale.

❌ What NOT to Do

Do not tie your identity to a tool. "I am a PHP developer" or "I am a Java developer" is not a profession. It is a description of a current skill set. The profession is "I solve business problems using technology." Which specific technologies you use depends on the task, not on the "skills" section of your resume.

Do not reject simple solutions because they are "not serious." If a task is solved with a 20-line bash script, that is not a reason to write a microservice. If a landing page can be made in plain HTML, that is not a reason to pull in Next.js. Simplicity is not the absence of professionalism. It is its highest form. Remember the Luchins experiment: the complex solution surfaces first not because it is better, but because the brain is accustomed to it.

Do not confuse depth of tool knowledge with understanding the problem. Knowing all React hooks by heart is not the same as understanding why a user came to the site. Knowing all kubectl parameters is not the same as understanding whether your project needs Kubernetes at all. The first can be automated. The second cannot.

Do not ignore AI tools. Rejecting Claude Code or Cursor "on principle" is the Semmelweis reflex at work: new information threatens the familiar worldview, so the brain rejects it. The result is the same as for Semmelweis's colleagues: being sure you are right does not save you from obsolescence.

Do not avoid foreign technologies. Every new language you try, even at the pet project level, expands your cognitive space. You don't need to become an expert in Haskell to start seeing patterns invisible from the world of imperative programming. It's enough to try.


🤔 Counterargument

One might argue: deep specialization is still necessary. Someone must know the internals of the Linux kernel. Someone must be able to optimize SQL to microseconds. Someone must understand the intricacies of shaders. And this is true. An engineer designing a database engine cannot afford a "broad but shallow" approach. For a certain class of tasks, narrow expertise is a necessity.

But there is a fundamental difference between conscious specialization and an unconscious cognitive prison. An expert in PostgreSQL who chose PostgreSQL for a specific project because they understand when a B-tree index is better than a hash index for a particular business case is a specialist who sees the context. A person who answers any task with a solution from PostgreSQL because they have tried nothing else is a prisoner of their stack. Formally, both are "PostgreSQL specialists." In practice, they are two different types of thinking. One chooses the tool. The other is tied to it.

✨ Syntax is expendable. Thinking is not.

The Pirahã tribe cannot count not because they have a different brain, but because their language lacks the tools for counting. A Java developer sees factories in every task not because factories are the best solution, but because their "language" lacks descriptions for simpler solutions. A React developer turns a button into eight files not because the task requires it, but because their cognitive apparatus does not describe the button otherwise.

The Sapir-Whorf hypothesis in the world of programming works not at the metaphorical level, but at the level of neurophysiology. Luchins' experiments with jugs, Bilalić's eye-tracking of chess grandmasters, Paul Graham's Blub paradox, and the University of Chicago's research on the influence of programming languages on thinking all describe the same mechanism. Familiar patterns are reinforced through repetition and begin to filter reality. This is not a bug of the brain. It is a feature that saves energy in a stable environment, but becomes a trap in a changing one.

Frameworks come and go. The graveyard of "the only true" tools grows every three years. A programmer who has changed five frameworks has not lost five years. They have gained five different perspectives on the same problems. Each tool left in their cognitive apparatus not syntax (which will be forgotten in six months), but a pattern of thinking (which will remain forever).

Stop worshipping syntax. The tech stack is expendable. Thinking is not 💀

Sources: Sapir & Whorf, Linguistic Relativity (1940s) · Luchins (1942), Mechanization in Problem Solving · Iverson (1979), Notation as a Tool of Thought · Graham (2001), Beating the Averages · Bilalić, McLeod & Gobet (2008), The Mechanism of the Einstellung Effect · Kay & Kempton, Color Categories and Linguistic Relativity · Chen (2018), Linguistic Relativity and Programming Languages · University of Chicago (2018), Computer Programming Languages Can Impact Science and Thought · Reznikov (2025), How Programming Languages Impact Mindset · Stack Overflow Developer Survey 2025
