Conceptual Framework for Agentic Work

Primer

My goal in this article is to provide a conceptual framework for agentic work whose results, compared to agents working without the framework, are

  • just as fast or faster to completion
  • higher quality on technical metrics
    • e.g., for development: performance, cost, vulnerability counts, etc.
  • higher quality on human metrics
    • e.g., for development: ease of digestion for review gating and learning, simplicity of maintenance, clarity of function for roadmap planning

That is a very minimal list of measures; each actor has their own circumstances and, in turn, their own relevant measures to optimize for. All the same, the ultimate goal of this framework is to improve the problem-solving process and footprint of agentic work, which I believe serves what most people are trying to use agentic work for: solving problems. To achieve something so abstract, this framework will of course be decoupled from any implementation outside of choice examples, hence why I am calling it a conceptual framework.

The Core Fundamentals

The core idea-set here is to roll up on a fast actor that is doing a bunch of things at once and say, “Hey, let’s hold on a moment. How about we break this stream of work down, organize all these broken up pieces we’re left with, package each of them up with some helpful material, put each of those packaged pieces through a process to tape them up, and leave records for each taped up package sent down the conveyor belt.”

Whether the actor is making a batch of code and operations changes for a product, completing personal digital-life tasks in the vein of email and calendar, or doing whatever group of work you're thinking of, the fundamentals still hold.

  1. Breaking large problems down into organized smaller ones pays off at all times. Interacting with the work involved becomes easier, both now (solving to produce and improve) and later (reviewing to gate and learn).

  2. Having a precise vision for the future pays off at all times. It becomes easier to ensure current work is warranted and fits the greater puzzle, and to minimize the pain and increase the potential of future work.

  3. Everyone (all humans + all AI) benefits from improved context. Problem onboarding and decision making become easier when everything involved in the problem space is within easy reach.

  4. Everyone (all humans + all AI) benefits from standing on the shoulders of giants. Many tools and lessons available today have been battle-tested in the real world and built for real use cases, and they make problem solving (and problem avoidance) easier.

  5. Everyone (all humans + all AI) benefits from a thriving ecosystem. Often a problem is not original: someone else, possibly more specialized and committed, is already working on it, and working with those parties can be the cheapest and most effective way to solve it.

It's important to remember that all of this applies to more than just agents suddenly making changes. Agents may have pushed everyone to focus on the issue of "orchestrating quality work that can be easily digested over a long period of time," but that issue long predates them and applies to everyone solving problems, whether with AI or without. It is entirely possible, and in fact quite wise, to apply lessons from human management to agent management and vice versa. Not every lesson transfers, though; taking all of anything without a selective hand usually ends up no good.

Breaking Task Streams Down

Providing Context

Providing Processes

Providing Records

Orchestrating Everything

Interesting Projects and Resources

Here are some interesting projects and resources I've saved during my passing study of and curiosity about AI, all of which I recommend. I pulled everything from the link stores I gathered while taking a first crack at this article.

Projects

https://gandalf.lakera.ai/gandalf-the-white

https://github.com/supermemoryai/supermemory

https://synetic.ai/

https://gastown.dev/

https://entire.io/

https://github.com/driftlessaf

https://github.com/omnilingo/omnilingo

Learning Resources

Working With AI

BSidesNYC 0x05 - The Human-AI Handshake: A Framework to Build Trust and Unlock In... (Michael Raggi) - YouTube: https://www.youtube.com/watch?v=F6zPsGNkQnE

BSidesNYC 0x05 - Exploit Intelligence with Agentic AI: Patch What Matters (Dmitrijs Trizna) - YouTube: https://www.youtube.com/watch?v=6yrvKdsvn8s

PhD Thesis: Greybox Automatic Exploit Generation for Heap Overflows in Language Interpreters – Sean Heelan's Blog: https://sean.heelan.io/2020/11/18/phd-thesis-greybox-automatic-exploit-generation-for-heap-overflows-in-language-interpreters/

On the Coming Industrialisation of Exploit Generation with LLMs – Sean Heelan's Blog: https://sean.heelan.io/2026/01/18/on-the-coming-industrialisation-of-exploit-generation-with-llms/

Firefly | 2026 Predictions: AI Won't Kill IaC. It Will Make It Non-Negotiable: https://www.firefly.ai/blog/2026-predictions-ai-wont-kill-iac-it-will-make-it-non-negotiable

A Spec Driven Approach | LinkedIn: https://www.linkedin.com/pulse/spec-driven-approach-ryan-mcdonald-ge1bc/?trackingId=7oMNm6W6gRraDniefaM2Rw%3D%3D

My LLM codegen workflow atm | Harper Reed's Blog: https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/

AI changes people: https://tailscale.com/blog/ai-changes-developers

Cautions Regarding AI

Dijkstra on foolishness of Natural Language Programming - YouTube: https://www.youtube.com/watch?v=MwMaBg7JpDc

E.W.Dijkstra Archive: On the foolishness of "natural language programming". (EWD 667): https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html

We Abstracted Ourselves Into Ignorance | LinkedIn: https://www.linkedin.com/pulse/we-abstracted-ourselves-ignorance-kavitha-daula-p2s5e/

Every Reason Why I Hate AI and You Should Too: https://malwaretech.com/2025/08/every-reason-why-i-hate-ai.html

AI Coding Sucks | Prime Reacts - YouTube: https://www.youtube.com/watch?v=rgiuaJbyUyU

Do Companies Really Need Designers Anymore? - YouTube: https://www.youtube.com/watch?v=bOm9GDv96Tk

Working with AI with Cautions

AI in the SDLC Panel: Friend, Foe, or Both? | Assemble 2026 - YouTube: https://www.youtube.com/watch?v=Mj8t_yhkPpI

What is a Machine Identity? Understanding AI Access Control: https://www.permit.io/blog/what-is-a-machine-identity-ai-access-control

Machine Identity Security: Managing Risk, Delegation, and Cascading Trust: https://www.permit.io/blog/machine-identity-security-managing-risk-delegation-and-cascading-trust

IAM Strategy for CISOs: Securing Non-Human Identities: https://blog.gitguardian.com/role-of-cisos-iam-nhi/

Nuts and Bolts in AI

Why AI Systems Beat Bigger Models | Donny Greenberg | Ignite Talks - YouTube: https://www.youtube.com/watch?v=EZwwMD6rIRE

From cloud native to AI native: The role of context density - SiliconANGLE: https://siliconangle.com/2026/03/27/cloud-native-ai-native-role-context-density/

AI 101 by AI in Games - YouTube: https://www.youtube.com/playlist?list=PLokhY9fbx05eeUZCNUbelL-b0TyVizPjt

NLP Course | For You: https://lena-voita.github.io/nlp_course.html

Natural Language Processing is Fun! | by Adam Geitgey | Medium: https://medium.com/@ageitgey/natural-language-processing-is-fun-9a0bff37854e

What is Prompt Engineering? | prmpts.AI: https://prmpts.ai/blog/what-is-prompt-engineering

Working Methodology and Approach

Software Assurance & That Warm and Fuzzy Feeling - Dhole Moments: https://soatok.blog/2026/01/15/software-assurance-that-warm-and-fuzzy-feeling/

On Caring – Thinkst Thoughts: https://blog.thinkst.com/2025/06/on-caring.html

Intuition-Driven Offensive Security Program for Critical Risk Discovery: https://andywgrant.substack.com/p/intuition-driven-offensive-security