
Becoming AI Native: How We’ve Adapted Our Product Development Workflows

A reflection from DevLabs by AngelHack, a software development + staff augmentation team with years of experience in product development.

Introduction

This article is part of our new Becoming AI Native series, where we speak with our CTO, product managers, and designers about how they are using AI in their product development workflows, what they learned from it, and what advice they would give to others navigating this shift.

Rather than treating AI as a shiny add-on, this series looks at how companies can become more AI-native: how work is defined, how teams make decisions and collaborate, and where judgement still matters.

In this first piece, we reflect on how our team has adapted its product development practices as AI becomes embedded into how we work: what it accelerates, where it falls short, and how our CTO, Chris Cai, has guided both technical and non-technical leaders through this shift.

What we mean by becoming “AI native”

As AI becomes widely adopted, expectations around speed, efficiency, and cost are shifting across industries. Teams that do not adapt risk falling behind, not because they lack talent, but because their ways of working are no longer aligned with how work is being done elsewhere.

For us, becoming AI native means:

  • Adopting AI into workflows
  • Implementing AI with clear ownership, review, and accountability
  • Adapting processes, roles, and decision-making as AI becomes part of everyday work

Becoming AI-native is not about replacing people or automating everything. It’s about learning how to work effectively in an environment where AI is now part of how work gets done.

How are we adapting our workflows to become more AI native?

AI helps us shape requirements from day one. We typically spend the first day going back and forth with tools like the BMAD agent to define the problem space, research competitors, and clarify what we are building.

This early-stage work is important because it sets the foundation for everything that follows. From there, AI plays a major role in structuring the work by:

  • Breaking high-level features into epics
  • Breaking epics into stories
  • Breaking stories into acceptance criteria

We review each step, but the heavy lifting of structuring this hierarchy is done by AI.

On the implementation side, we use tools like Cursor or Claude Code to generate most of the code.

Kevin Scott, Microsoft CTO, predicted that 95% of all code will be AI-generated by 2030.

Right now, our CTO still reviews every single story, but he’s actively testing whether he can trust the agent with larger chunks of work before doing a detailed review.

What has changed most for our developers is how they spend their time. Rather than writing everything from scratch, our CTO explains, developers are now more focused on:

  • Defining expectations for what the developer agent should do
  • Establishing patterns that make the codebase maintainable
  • Reviewing output

The technical work hasn’t stopped; it has shifted upwards, from writing all the code to designing how AI should write it.

What tasks has AI replaced or sped up for us?

Before we implemented AI into our workflows, our product team would manually document the product briefs, notes, and breakdown of steps. Now, most of that is generated by AI and reviewed by our team.

Tasks that are now largely AI-generated include:

  • PRDs (Product Requirement Documents)
  • Epics, stories, and acceptance criteria
  • Contextual documentation that supports implementation
  • Most of our coding

This removes lengthy documentation and repetitive work from the process.

However, this doesn’t mean everything is automated. When our CTO and product managers know they need to communicate something clearly to a wider team, they often still create their own visuals or diagrams. AI helps with text and structure, but shared understanding still requires human judgement and storytelling.

An important lesson for us as a team: AI is more of a partner in thinking and brainstorming than a replacement. It helps articulate and formalise ideas, not originate them.

Where has AI fallen short or caused issues?

One of the biggest lessons we’ve learnt is that AI tends to take the shortest path to a goal. We noticed that if the prompt or objective wasn’t detailed enough, AI would confidently choose solutions that technically worked but weren’t production-ready or built to scale.

In other words, the code might run, but the implementation quality wasn’t where it needed to be. Because our workflow is layered,

Product brief → epics → stories → acceptance criteria → implementation

whatever comes out of the earlier stages becomes the “source of truth” for the next ones.

If something is slightly wrong in the documentation at the start, AI will treat it as approved reality and build on top of it. That small mistake can snowball into much bigger issues later on.

Becoming more AI-native has actually made us more disciplined about early review, not less. Catching problems early saves a huge amount of time downstream.

To avoid this, the most effective way to work with AI has been to treat it like an assistant or junior developer: something that needs guidance, context, and review rather than an autonomous builder.

What critical decisions still sit with our team?

Developers

Even with heavy AI use, there are still areas where our developers play a crucial role. Beyond reviewing code and fine-tuning it, one of the most important responsibilities is setting best practices and common patterns.

Compare this to working with a large team of developers. If everyone writes code in completely different ways, the system becomes hard to maintain. AI behaves similarly: without clear guardrails, it tends to pick the fastest solution rather than the most maintainable one.

So a big part of our work is defining universal specifications for the project:

  • How functions should be structured
  • How state should be managed
  • What “clean” architecture looks like
  • How we want the codebase to evolve over time

These decisions rely on experience, context, and long-term thinking that AI doesn’t naturally apply on its own.
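A guardrail like this can be written down as code, not just prose. Below is a hypothetical sketch of one such "universal specification": every service-layer function returns a shared, typed `Result` instead of raising ad hoc exceptions, giving AI-generated code a single, reviewable error-handling pattern. The `Result` class and `parse_age` function are illustrative inventions, not part of any framework mentioned in this article.

```python
# Hypothetical example of codifying one project-wide convention:
# service functions return a typed Result rather than raising
# ad hoc exceptions, so AI-generated code follows one pattern.
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

T = TypeVar("T")


@dataclass(frozen=True)
class Result(Generic[T]):
    """Uniform return type for all service-layer functions."""
    ok: bool
    value: Optional[T] = None
    error: Optional[str] = None

    @classmethod
    def success(cls, value: T) -> "Result[T]":
        return cls(ok=True, value=value)

    @classmethod
    def failure(cls, error: str) -> "Result[T]":
        return cls(ok=False, error=error)


def parse_age(raw: str) -> Result[int]:
    """Example service function written to the convention above."""
    if not raw.isdigit():
        return Result.failure(f"not a number: {raw!r}")
    return Result.success(int(raw))
```

Once a pattern like this lives in the codebase, it can be referenced in prompts and enforced in review, so the agent's "fastest solution" is also the maintainable one.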

Correcting Mistakes

Some mistakes AI makes can only be caught by experienced developers. One concrete example our CTO, Chris, shared was around tracking a chatbot workflow.

The AI designed a system that stored similar information in two places:

  • A field indicating the current step of the user
  • A separate list marking which steps were completed

Individually, both made sense. But later in development, the AI got confused about which should be treated as the source of truth, which led to inconsistent behaviour. The design itself wasn’t wrong; the problem was that the implementation lost sight of why the design existed in that form. An experienced developer would have kept that in mind while coding and ensured a single, consistent reference point.

This is the kind of subtle issue that AI struggles with and that our developers are still essential for catching.
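One way an experienced developer might resolve the duplicated-state problem above: keep the completed-steps list as the single source of truth and derive the current step from it, rather than storing both. This is a hedged sketch of that idea; the step names and helper below are hypothetical, not the actual system Chris described.

```python
# Hypothetical sketch: derive the current step from the completed
# list instead of storing it separately, so there is exactly one
# source of truth for workflow state.
WORKFLOW = ["greet", "collect_email", "answer_question", "wrap_up"]


def current_step(completed: list[str]) -> str:
    """Return the first workflow step not yet completed."""
    done = set(completed)
    for step in WORKFLOW:
        if step not in done:
            return step
    return "finished"
```

With this shape, the "current step" field can never drift out of sync with the completed list, because it no longer exists as stored state.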

Human Judgement

In addition, AI can give you a perfectly functional product, but it won’t naturally think about whether:

  • The user understands the product
  • The flow feels guided
  • The experience feels intuitive

This layer of product thinking is still very much ours.

Interestingly, this is also why becoming more AI-native hasn’t reduced the importance of product judgement, but amplified it.

How far can a non-technical founder realistically get using AI?

One of the biggest shifts we’ve seen is that the barrier to product development has moved from coding to structuring and communication.

A completely non-technical founder can realistically build an MVP today by:

  • Spending 30–60 minutes learning how to set up a basic development environment
  • Using AI coding tools like Cursor or Claude Code
  • Leaning on services like Supabase for infrastructure and authentication

These services are user-friendly and can guide you through setup step by step.

“Startups using AI-assisted tools are launching functional MVPs in just 2 to 6 weeks – that’s roughly 10 times faster (than before AI adoption).” (RapidNative)

It really can feel like having a technical engineer on call inside your console.

How do we help teams adapt to this shift? A Three-Step Approach

Step 1: Learn the environment

Get comfortable using an IDE like Cursor or VS Code. You don’t need to be a coder; you just need to understand where things live and how to interact with AI inside the editor.

Step 2: Use a structured framework

We often recommend the BMAD method or Agent OS. These frameworks define clear roles (business analyst, product manager, engineer) and guide you through a structured process rather than letting AI run chaotically.

Step 3: Follow the framework end-to-end

Even if your idea is just one sentence: “I want to build an app to track my fitness goals”, the framework will guide you through:

  • Defining requirements → The BMAD agent guides this step through structured prompts about target users, pain points, and expected behaviour. These inputs are then translated into clear functional and non-functional requirements, providing a solid foundation before breaking work into epics and stories.
  • Creating a PRD
  • Breaking work into epics and stories
  • Implementing a working application

The biggest learning curve isn’t coding; it’s getting used to working inside an IDE like Cursor. With a few tutorials, most people can get comfortable very quickly.

When does bringing in a developer become necessary?

For building an MVP or a demo, a developer may not be necessary.

The inflection point comes when companies start thinking about scale, cost, and sustainability.

A big risk for early startups is underestimating operational costs. If your product relies heavily on AI APIs, every call costs tokens, which translates into money.
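A rough back-of-envelope calculation makes the point. The function below is a generic sketch; the per-token price and usage numbers are illustrative placeholders, not any provider's real rates.

```python
# Back-of-envelope API cost estimate. The prices used here are
# illustrative placeholders, NOT real provider rates.
def monthly_api_cost(calls_per_day: int,
                     tokens_per_call: int,
                     price_per_1k_tokens: float,
                     days: int = 30) -> float:
    """Estimated monthly spend in dollars."""
    total_tokens = calls_per_day * tokens_per_call * days
    return total_tokens / 1000 * price_per_1k_tokens


# e.g. 10,000 calls/day at 2,000 tokens each and a hypothetical
# $0.01 per 1K tokens comes to $6,000/month.
cost = monthly_api_cost(10_000, 2_000, 0.01)
```

Even modest per-call prices compound quickly once usage scales, which is exactly the moment the trade-offs below need an owner.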

At that stage, you need someone who understands both engineering and business trade-offs to:

  • Audit your system
  • Reduce inefficiencies
  • Optimise how AI is being used

This doesn’t always have to be a traditional software engineer, but it should be someone who understands how these systems work and can balance performance, cost, and usability.

What risks do companies underestimate?

The biggest one is cost. AI can make building feel cheap and easy at first, but usage costs can balloon quickly at scale.

Code maintainability is also a risk, but for early MVPs, we don’t think it matters that much unless you’re already hitting massive scale.

One example is a case study from Twitch, whose entire front page was run off the Google Calendar API for years. Imperfect engineering can still work; what matters is being aware of the trade-offs you are making as a company.

Conclusion

AI has reduced much of the tedium in documentation and implementation, but it has made clarity, guardrails, and early review more critical than ever.

Becoming more AI-native hasn’t just changed how fast we build, but how we organise work, assign responsibility, and make decisions as a team.

Our developers now spend less time writing code and more time setting standards, shaping systems, and ensuring what we build is maintainable and meaningful. At the same time, AI has lowered the barrier for non-technical leaders to engage in product development, as long as they approach it with structure and care.

In the next article of our Becoming AI Native series, we will dive deeper into how we train non-technical founders to build with AI. We will unpack our three-step approach in more detail, walk through practical examples, and highlight common pitfalls so others can move quickly without losing clarity or control.


Is Your Startup/Company Thinking of Becoming AI Native?

Book a consultation with us for free for some advice on becoming AI native.

Book a Free Consultation