
Gen AI is on the up!
It’s been an amazing few years watching the rise of Generative AI (Gen AI). Platforms such as ChatGPT, Claude, and Perplexity have become mainstream tools, and we are becoming accustomed to seeing AI-generated summaries in search engines like Google. Use of AI does tend to be more pronounced among people invested in AI, but the general trend seems to be that Generative AI is becoming a part of daily life for millions of people around the globe.
There is also a scramble among organizations hoping to leverage this powerful new technology, with the majority of companies launching Gen AI programs and products. It’s worth noting that most companies have yet to see a discernible increase in earnings, but the potential benefits are alluring, as is the fear of missing out. Government organizations are also gaining momentum, with an uptick in Gen AI pilots in the US and a recent lowering of barriers to the procurement of AI resources. Nonprofits are steadily embracing AI: 85% are already exploring generative tools like ChatGPT, 24% are actively beginning implementation, and only 26% haven’t started yet.
PIA’s AI-powered strategy starts with building a foundation
Our first port of call has been to develop pipelines that provide data any AI can work with, along with tools that let AI retrieve this information across a wide range of US public data sources. We’ve avoided pouring time and effort into developing chatbots for now. While they’re popular and can generate buzz, they are also complex to support and can end up competing with commercial products like ChatGPT. Instead, we use AI only where it adds real value for our communities, taking a measured approach to ensure it remains sound and responsible.
AI underpins many of our initiatives, sometimes in unassuming ways. We launched the Recommendations Spotlight to make it easier to explore oversight recommendations issued by the U.S. Government Accountability Office and Offices of Inspectors General. The Spotlight includes data augmented with AI-predicted classifications as well as an AI search engine. And just like the organizations mentioned above, we have a suite of state-of-the-art prototypes that we test with our user community to ensure they add demonstrable value and can be trusted. Not all of these will be successful, and that’s expected. What matters is learning quickly from feedback and moving on to what works.
We use AI to build AI
There are a few areas where PIA is all-in on AI. All of our data pipelines, search integration, testing, monitoring, infrastructure deployment pipelines, and web apps have been developed with AI assistance.
As a small non-profit startup, the effect of using AI for technical development has been profound. The work done so far would previously have required several teams, with expertise spanning DevOps, full-stack web development, database management, data engineering, data science, AI, infrastructure, authentication, and even WordPress plugins for good measure.
Lines of code aren’t a rigorous measure of quality, but to give a sense of activity: we have developed about 72,000 lines of production code.

PIA lines of code developed with AI assistance in the last 6 months. Analyzed using cloc.
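For those curious how such a count works, here is a rough, hypothetical sketch of the idea in Python. It is a much-simplified stand-in for cloc, which handles many languages and comment styles; the function name and the single-comment-prefix rule here are illustrative only.

```python
from pathlib import Path


def count_code_lines(root: str, ext: str = ".py") -> int:
    """Count non-blank, non-comment lines under a directory.

    A crude approximation of what cloc reports: blank lines and
    full-line '#' comments are excluded from the total.
    """
    total = 0
    for path in Path(root).rglob(f"*{ext}"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for line in text.splitlines():
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                total += 1
    return total
```

In practice cloc does far more (per-language comment syntax, inline comments, file-type detection), which is why we use it rather than anything hand-rolled.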
This volume of new code isn’t actually a lot for many organizations with teams of developers, unless we consider …
All this was done in under 6 months by one developer.
The above code is live and being used, from PIA’s data engineering pipelines that ingest data across a range of public data sources covering hundreds of thousands of reports and articles, to web applications like our AI search interface, deep research agents, and interactive tools on PIA’s website.
How is this possible?
AI for Software Development … Got a LOT Better
As Matthew Harris, our Director of AI & Innovation, noted recently, the last 6 months have seen a dramatic improvement in capabilities for using AI to generate software. This isn’t a new concept: tools like GitHub Copilot have been around for a few years now and are used extensively by developers. What has changed is the quality, driven largely by supporting tools like Cursor. These tools have gone beyond suggesting minor edits to taking a user’s request and generating full software applications, running and debugging them automatically, generating tests, and even deploying to cloud infrastructure. They can also perform security reviews, refactor code to make it easier to maintain, and much more.
The human is still absolutely in the loop, and an experienced (human) developer is crucial, but AI has removed much of the manually intensive work and gives the developer a boost in areas that may be outside their immediate expertise.
This is where the AI revolution is genuinely underway, and it isn’t hype. At PIA, we use AI every single day for solutions that are already live, and it has allowed us to do much more with less.
The Tool We Use: Cursor
There is now a dizzying array of tools to choose from for AI-powered software development, and some of the main platforms are captured in the diagram below.

Diagram source: “AI Coding Assistants Landscape (03/2025)” by Bilgin Ibryam, The Generative Programmer (March 2025).
At PIA, we have settled on Cursor for the bulk of our development, powered by Claude Sonnet 4. [ Disclaimer: We have no association with Cursor.ai ]. It is a really clever software development environment that allows the developer to ask an AI agent to write code. Its power lies in its easy-to-use interface as much as in the AI itself: it includes excellent visual tools that show the developer what has changed, let them accept or reject proposed changes, and roll back to previous iterations. It becomes especially powerful when set up to run unit tests for every requested change, so that the AI automatically confirms its own work.
The developer can ask the AI to create an application or adjust an existing one by providing a set of functional and technical requirements, and the AI will generate accordingly.
Some Lessons PIA Has Learned about AI-assisted Software Development
AI-assisted software development is not magic (yet!). For reliable, secure, and trustworthy results, the developer needs to make requests that guide the AI on which technologies to use, and ensure the generated result adheres to standards and anticipates common pitfalls. Like a human, AI doesn’t work terribly well without clear specifications.
Here are a few tips we have found to give good results:
- Provide detailed guidance on libraries and techniques to be used
- Provide the standards that the result must be compliant with
- Provide web links that provide more detailed context
- Ideally, provide GitHub or other example code to show the required approach
- Where possible, request that the AI run and test the code to fix bugs
- Be clear on how the code will be run, for example, by using Docker
- ALWAYS require comprehensive unit tests to be generated for any new feature
- ALWAYS require the full library of unit and code quality tests to be run on every request
- Periodically ask the AI to review tests to ensure they actually test the real code
- Provide clear examples of security requirements from the start
- Request that each change includes updates to the documentation for the humans
- ALWAYS keep an eye on what the AI is doing to make sure it’s not being silly
The unit test instructions are very important, as building a library of these allows the AI to test the code with each change. The human developer also needs to stay alert and be comfortable correcting the AI when its implementation is poorly considered, which happens a lot.
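To make the unit-test point concrete, here is a minimal, hypothetical example of the pattern: a small helper and a pytest-style test generated alongside it, which the AI can re-run after every change. The function and its names are illustrative, not PIA’s actual code.

```python
def normalize_report_title(title: str) -> str:
    """Collapse runs of whitespace and trim a report title (illustrative helper)."""
    return " ".join(title.split())


# A pytest-style test the AI is required to generate with the feature,
# and to re-run (along with the rest of the suite) on every change.
def test_normalize_report_title():
    assert normalize_report_title("  GAO   Annual  Report ") == "GAO Annual Report"
    assert normalize_report_title("") == ""
```

As the test library grows, each new request is checked against everything that came before, which is what keeps an AI-driven codebase from quietly regressing.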
Many of these points have evolved from our work in the last 6 months and have now become standard for PIA. We add most of them to Cursor’s settings under its rules configuration so they apply to all requests. From these instructions, the AI can create a new application in about 10 minutes: set up an environment, generate code, run unit tests, fix bugs, and finally settle on a version. There are always a few ‘tweaks’ after this initial prompt, where the developer instructs and refines, but it’s usually possible to have a first working version in an hour or two.
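As a hypothetical illustration (not our exact configuration), a Cursor rules file capturing a few of the tips above might read something like:

```markdown
# Project rules (illustrative example)

- Use Python with type hints for all new code.
- Every new feature must include comprehensive pytest unit tests.
- Run the full test suite and linter after every change; fix any failures.
- All services run in Docker; update the Dockerfile when dependencies change.
- Follow the project's security checklist; never hard-code credentials.
- Update the documentation and docstrings whenever behavior changes.
```

Because these rules are attached to the project rather than a single prompt, every request the developer makes is automatically held to the same standards.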
The process remains highly technical and, for optimal results, requires an experienced software developer; it’s very much a human-in-the-loop process. Stepping back, though, we are able to develop first working versions of applications in hours. A year ago, especially for a new and rapidly evolving standard like MCP, getting to this stage would have taken at least a few days.
AI-Assisted Software Development Might Not Be for Everyone, Yet
Recent advances in AI-assisted development have helped PIA’s work, but they may not yet be appropriate for other organizations. Even with recent developments to encourage more use of AI, security constraints at federal agencies may limit the use of external Large Language Models and AI-powered software development tools like Cursor.
Another concern is that AI-generated code can sometimes be of low quality and introduce technical debt, making it harder to support. There is some evidence this is happening, but many of the studies are from 2024 and earlier and don’t account for the most recent advances. Our experience at PIA is that AI-generated code quality, especially when we are careful to include human-in-the-loop standards and testing, has improved dramatically in the last year. That said, if an organization has an extensive team of experienced developers, the process will perhaps remain more human (assuming they aren’t all quietly using AI for their work already 🙂).
In The Far Distant AI Future … Like, Maybe, Next Year
Regarding the concerns about code quality, there is something else worth considering …
Tomorrow’s AI may be able to fix any issues generated by today’s AI.
Less than a year ago, much of what is described above wasn’t feasible. So, given the accelerating capabilities of AI in software development, what will next year bring?
We would love to hear from other organizations about their AI development process. Feel free to send us a note at [email protected]