Case Study in Agentic Work
ParcelReport.ca solves a real problem, and it was built in days with an agent swarm.

Coding agents supercharged my work on a real problem: turning 40 disjoint APIs into a single unified, searchable dataset. Allow me to share my advice and strategies.
The Problem
Realtors, investors, developers, planners, insurers and lenders don’t have the tools they need.
Part of their job is to look up parcel data across the province. Sometimes one parcel, sometimes a whole neighborhood. They could be looking for assessment values, zoning codes, lot sizes, dates of construction, etc.
Looking up those data points is very cumbersome! Every municipality has a data portal, and they are all wildly different. Some have multiple APIs with subsets of the data, some don’t have an API at all. Some are fast, some are slow. Very few allow bulk fetching for analytics or research.
Wrangling APIs is something I’ve done throughout my career. It’s not glamorous work, but it could put a tiny dent in the cost of housing in my province! However…
It would take a huge amount of time for one person.
Enter: Agentic Coding
“Agentic”: When AI adapts while solving a task, not needing a fully defined path.
Two years ago, I would have estimated this project at 6 weeks. Let me walk you through exactly how I did it in 6 days by maximizing how much of the work is done by AI agents.
Monday: Fact Gathering
One of the main time savings with AI is in fact gathering. This is also generally something the “not quite agentic” models are better suited for, like ChatGPT in Research mode. In this mode, ChatGPT asks clarifying questions; my recommendation is to keep the initial search open and wide, then follow up with normal ChatGPT to extract and summarize the results.
```
id: calgary_assessments
- https://data.calgary.ca/resource/...json
- Open Government Licence – City of Calgary
id: reddeer_parcels
- https://arcgis.reddeer.ca/.../OpenData/BASE_Parcels_inservice_PUBLIC
- Open Government Licence – City of Red Deer
id: reddeer_zoning
- https://arcgis.reddeer.ca/arcgis/.../0
- Open Government Licence – City of Red Deer
... (30+ more)
```
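Keeping that compilation machine-readable pays off later in the week. Here is a minimal loading sketch, assuming the registry lives in a `sources.yaml` file whose entries carry `id`, `url`, and `license` fields; the file name and field names are my assumptions, not the project’s actual format.

```python
# Minimal sketch of loading and sanity-checking the source registry.
# The sources.yaml file and its field names are hypothetical.
import yaml  # pip install pyyaml

def load_sources(path: str = "sources.yaml") -> dict:
    """Load the registry and fail fast on missing fields."""
    with open(path) as f:
        entries = yaml.safe_load(f)
    for entry in entries:
        for field in ("id", "url", "license"):
            if field not in entry:
                raise ValueError(f"{entry.get('id', '<unknown>')} is missing '{field}'")
    return {entry["id"]: entry for entry in entries}

if __name__ == "__main__":
    sources = load_sources()
    print(f"{len(sources)} sources registered")
```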
After about 3 hours of agent work (and an hour or so of directing and checking), I had a verified compilation of all the available APIs, confirmation that the project was viable, and a plan for the solution.
Main Takeaway: Licensing attribution is a great example of when to manually verify all AI claims, sometimes even with a lawyer.
Midweek: Data Pipelines
All these sources of data needed fetching and normalizing into one shape: a data pipeline. Data pipelines contain many tasks that are perfect for an agent swarm to accomplish, but only if I can truly rest assured the work will be correct; otherwise, I need to know the details as well as the AI does.
“Agent Swarm”: Running several agents at once on different tasks.
Breaking the process down into core bits of functionality lets us go through each one and consider whether it is suitable for an agent. I strongly encourage this exercise whenever you have a lot of tasks to do.
Functionality | Considerations
---|---
Download data and store raw output | Ideal agent task! The context is a URL, and we just want raw data. (Sketched below.)
Learn how outputs can be joined | Poor agent task. The strategy here is to have AI propose methods of joining and review them yourself.
Implement the join and normalize | Ideal agent task! We have concrete input and a fixed format for output. (Sketched below.)
Merge the pipelines and run checks | Very poor agent task. Generally, AI should not decide which guardrails are needed.
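To make the first and third rows concrete, here is a hedged sketch of both ideal agent tasks: store the raw response untouched, then map each source’s records onto one shared shape. The target schema and field mappings are illustrative assumptions, not the project’s real code.

```python
# Sketch of the two "ideal agent task" rows. Everything below is
# illustrative: the target schema and field mappings are assumptions.
import urllib.request
from pathlib import Path

RAW_DIR = Path("raw")

def fetch_raw(source_id: str, url: str) -> Path:
    """Download and store the untouched response; no transformation here."""
    RAW_DIR.mkdir(exist_ok=True)
    out = RAW_DIR / f"{source_id}.json"
    with urllib.request.urlopen(url) as resp:
        out.write_bytes(resp.read())
    return out

# One hand-reviewed mapping per source: target field -> source field.
FIELD_MAPS = {
    "calgary_assessments": {
        "parcel_id": "roll_number",
        "assessed_value": "assessed_value",
        "zoning": "land_use_designation",
    },
}

def normalize(source_id: str, record: dict) -> dict:
    """Map one source-specific record onto the shared parcel shape."""
    fields = FIELD_MAPS[source_id]
    return {target: record.get(source) for target, source in fields.items()}
```

Keeping the fetch step dumb is the point: an agent can be trusted with “download this URL and save the bytes” far more readily than with anything that interprets the data.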
I spent about 5 hours across 4 days, while the agents put in about 50 hours. Some of my involvement was setting the strategy for the second row, then deciding the guardrails for the final data. I used these checks: statistical anomaly detection; reasonable-bounds checks; about 100 manually researched records; manually allow-listed outliers.
In fact, these checks did catch a few errors. The coordinate transformations were wrong, which the bounding-box and distribution checks caught. Assessments were also wild on some tiny properties, which anomaly detection caught; it turned out apartment units get the assessment of the whole building. Testing becomes more important as you give the agents more responsibility.
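For a sense of scale, those two automated guardrails can be quite small. This sketch assumes normalized records with `lat`, `lon`, `assessed_value`, and `lot_size_m2` fields (my names, not the project’s); Alberta’s rough coordinate bounds are real, while the anomaly threshold is illustrative.

```python
# Sketch of the bounding-box and anomaly guardrails described above.
# Field names and the threshold are assumptions; the bounds are real.
import statistics

LAT_MIN, LAT_MAX = 49.0, 60.0      # Alberta spans roughly 49°N to 60°N
LON_MIN, LON_MAX = -120.0, -110.0  # and 120°W to 110°W

def in_alberta(parcel: dict) -> bool:
    """Catch bad coordinate transforms: every parcel must land in Alberta."""
    return (LAT_MIN <= parcel["lat"] <= LAT_MAX
            and LON_MIN <= parcel["lon"] <= LON_MAX)

def flag_assessment_anomalies(parcels: list[dict], max_ratio: float = 10.0) -> list[dict]:
    """Flag parcels whose value per square metre is wildly above the median,
    which is how a whole-building assessment on a single unit would surface."""
    with_area = [p for p in parcels if p.get("lot_size_m2")]
    median = statistics.median(p["assessed_value"] / p["lot_size_m2"] for p in with_area)
    return [p for p in with_area
            if p["assessed_value"] / p["lot_size_m2"] > max_ratio * median]
```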
Main Takeaway: Never rely on AI to catch its own mistakes; it’s impossible. Try to be a good second opinion for the agent when it matters.
Friday: Distribution
Finally, we come to distribution.
The live search bar, the results page, and pulling the data into a database were the three main features to build for distribution. These are great features for AI to implement with a very standard and recommended approach (the database step is sketched after this list):
- Have the agent research and plan a solution to a concise problem.
- Use your human intuition and experience to refine the plan.
- Pay particular attention to clear acceptance criteria.
- Let it run, grab a coffee (or spin up a second agent).
- Confirm the acceptance criteria yourself, even if checking is automated.
This general workflow lets an agent be both productive and correct.
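As one concrete example, here is what the database step might look like: SQLite with an FTS5 index backing the live search bar. The schema and table names are my assumptions; ParcelReport’s actual stack may well differ.

```python
# A hedged sketch of loading normalized parcels into SQLite and serving
# the live search bar from an FTS5 index. Schema names are assumptions.
import sqlite3

conn = sqlite3.connect("parcels.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS parcels (
        parcel_id TEXT PRIMARY KEY,
        address TEXT,
        municipality TEXT,
        assessed_value INTEGER,
        zoning TEXT
    );
    CREATE VIRTUAL TABLE IF NOT EXISTS parcel_search
        USING fts5(parcel_id UNINDEXED, address, municipality);
""")

def index_parcel(p: dict) -> None:
    """Insert one normalized parcel into the table and the search index."""
    conn.execute(
        "INSERT OR REPLACE INTO parcels VALUES (?, ?, ?, ?, ?)",
        (p["parcel_id"], p["address"], p["municipality"],
         p["assessed_value"], p["zoning"]))
    conn.execute(
        "INSERT INTO parcel_search VALUES (?, ?, ?)",
        (p["parcel_id"], p["address"], p["municipality"]))

def search(query: str, limit: int = 10) -> list:
    """Prefix match for as-you-type search."""
    return conn.execute(
        "SELECT parcel_id, address FROM parcel_search "
        "WHERE parcel_search MATCH ? LIMIT ?",
        (query + "*", limit)).fetchall()
```

The acceptance criteria for a feature like this can be stated up front (“typing a partial address returns matching parcels within some latency budget”), which is exactly what makes it agent-friendly.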
Main Takeaway: What separates Agentic from Vibe Coding is good human intuition and involvement at the start and end.
Results
This project was a great chance to practice working through agents while also providing a useful service. People often make wild claims of super-productivity with AI, and in reality, they are occasionally true. Maximizing how often agents can work for you is a muscle you can, and should, build.