Builders gonna build Pt. 2
So. Much. To. Learn.
Much later than hoped, I want to share what’s next, what I’m looking forward to, and the unknown I’m staring into.
What’s next:
It comes down to two things: what it takes to build the best agentic applications and systems, and how teams work in an AI-native way. I’ll spend more time here on the latter; the former will come in separate posts.
On building in an AI-native way:
I believe teams can try to build agentic products with conventional pre-LLM ways of working, but they will ultimately fail to get the best outcome. Deterministic planning processes and rituals don’t fit a product whose output is not deterministic but stochastic; that demands rapid experimentation and leaning into the discomfort of less predictability. At the same time, I’m not advocating for generating output that doesn’t improve on your learnings, output that never becomes part of a loop for your product development.
Don’t get me wrong, building random things quickly is entertaining, but not fruitful. Strong definition is still required, but where, when, and how? I’ve been obsessively reading and experimenting for the last 6 months on what that transition looks like. I grew up in the previous regime of tech: engineering timelines took longer than product planning, product spent time on customer research and all that jazz, made some concerted bets, and a bunch (and I mean a TON) of ceremonies and rituals later, that got into the roadmaps, epics, stories, and sprints, and something happened. We thought we were fast when we were agile versus waterfall. Wow, we were so slow. Disciplined product decisions that weren’t too hasty were rewarded with (at least a perception of) predictability, but they presumed building had significant costs, and even experimentation required looking at your roadmap allocations.
That doesn’t make sense anymore. Experimenting is fast, and it only requires thinking through part of what you’re building; you get inputs to react to faster and better, which lets you iterate on aspects of what you’re building without staring at a blank slate. See Anthropic - Product Management on the AI Exponential and How OpenAI's Codex team Works. There’s lots of inspiration out there (see the build-in-public feeds of X/LinkedIn), but I wouldn’t say go copy the processes in those links verbatim. The important part is understanding what’s changing and how you execute on that (or don’t, and why); there are always infinite ways to do it.
I quickly realized I needed to figure out how to enable exponentially faster async coordination across disparate, semi-autonomous teams working on discrete parts of the platform, at the product-definition level, not the codebase/repo level, while also removing the traditional approach to coordination through long, complicated ceremonies and rituals. Enabling individual contributors to, well, you know, CONTRIBUTE. Everyone needs transparent context on the oh-so-always-elusive roadmap, one that is actually current, and needs to see and understand progress against it. No more roadmap tools with all the rectangles and Gantt-chart-esque vibes. Hello, roadmap via Git, or wherever you can keep a simple, uncomplicated, but up-to-date, version-controlled living repo of what the heck you’re building from a product perspective.
Something where you can see WIP thinking via branches: interdependencies, and agreement or differences on what’s in flight. It turns out that’s not an insignificant problem statement to solve. Working in an AI-native way is not merely giving employees ChatGPT or Claude licenses. It requires a mindset shift (easier said than done, if you’ve worked in a large org with lots of teams….).

I hear/see a lot of talk about measuring engineering productivity and code that gets to production. Every day you wake up to “so and so….10MLOC” and then “100 billion tokens in an hour” or whatever, but I don’t see nearly as much discussion of the insanity and pandemonium this is already creating for the other players on the team. There are posts like Elena’s Confessions of a Millennial in Tech (huge fan of her work/newsletter btw) that get at what a lot of people (myself included lol) are or could be feeling a version of in the midst of all of this. I imagine there’s a version of this for folks from so many backgrounds. Rightly or wrongly, tech companies for a generation were built around engineering timelines. What happens when that changes and it’s not a universal truth anymore? Companies are adopting much more slowly and less ambitiously than the wishful thinking of their LinkedIn posts suggests. Individuals can change way faster than companies, and we’re seeing what it means to build and work in tech evolve in a rapid way that I find extremely exciting, and unsettling on bad days.
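The branch-and-diff idea above can be sketched with nothing but git itself; everything here (the repo layout, file names, branch names) is illustrative, not a prescription:

```shell
# A minimal sketch of a "roadmap as a repo" setup. All names are made up.
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "Example"

# One plain-text file per product area, versioned like code
mkdir areas
cat > areas/agent-platform.md <<'EOF'
# Agent Platform
## Now
- Tool-use evals wired into CI
## Next
- Async coordination spec
EOF
git add -A
git commit -q -m "Roadmap: current state"
git branch -M main   # normalize the branch name for the diff below

# WIP thinking lives on a branch; agreement happens via merge/PR review
git checkout -q -b proposal/async-coordination
printf -- "- Cross-team dependency map\n" >> areas/agent-platform.md
git commit -qam "Proposal: add cross-team dependency map"

# Anyone can diff in-flight thinking against what's already agreed
git diff main -- areas/agent-platform.md
```

The point isn’t the tooling; it’s that product thinking gets the same versioning, review, and diffability that code already enjoys.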
When there’s insanity and pandemonium on the menu, there’s a lot of opportunity to make it better. I can’t say the PMs who worked for me in the past looked forward to product review.
What I’m looking forward to:
It actually took practically a month to follow up Pt. 1, so this is probably more accurately called “what I’m enjoying and embracing in the new role” thoughts. I’ve always found role changes could be VERY different, or literally ZERO different, from one to the next. I, and many others, got the pitch all the time, especially in large companies: “come join this team, lead that thing, do this new thing, it’ll be SO DIFFERENT, SO EXCITING”, and inevitably there’s a non-zero chance it’ll be exactly like your old role, but with a more complicated and complex internal title that sounds super cool? (Or not…). Best to ignore the titles and just use a new role as license to shed some old things, learn some new things, and revisit some missed things, because… WHY NOT!
What I’ve enjoyed in making the move into this role is forcing myself, by personal preference and necessity, to look much broader and wider than security and compliance problem statements. I’ve always had eclectic interests across a myriad of seemingly unrelated things; the role change just gave me more license (at least the perception of it in my head) to take that up 10 levels. I spend a healthy amount of time consuming ideas and concepts I read in the morning, hacking together and running through them that same day, iterating some more, tossing it out, keeping some of it, moving on to do it again, sharing with friends and colleagues, hearing and seeing what they’re doing, on repeat. Continuously, I’m in a loop (agent speak??): stuff my brain with seemingly useful context, reason about it, check the output, dump half (TOKEN EFFICIENCY!), get more efficient, do it all over again.
If agents can improve, why can’t we?
The unknown:
The world didn’t literally change when I changed roles, but my perception of it did. As a full-time security leader, I had beliefs about what my days would be like, the problems I’d face, the things I’d have to resolve, etc. Changing roles, I’ve come to see that whatever my beliefs were, I was probably staring into the same unknown and uncertainty about the future then as I am today. In a different role, I guess, it’s just more apparent to me, and for sure a bit unsettling. I’ll be honest though: at the same time, I love it. Being reminded that the future is yet to be written, especially in a time of endless statements about what is or isn’t inevitable, makes it much more fun to embrace the reality of uncertainty.
There are problems today, there will be problems tomorrow, so I’m ‘staring into the unknown’ all-day/everyday and not hating it, kinda loving it.
Some things I enjoyed reading lately, in no particular order:
Matt Sheehan’s - China is getting worried about AI & jobs - used to cover/follow US-China cyber relations professionally in past roles, always interesting to get a long read on the matter. Nothing like some tea on the Ministry of Some Super Important thing fighting with the Administration of Really Important Matters.
Jasmine Sun’s - AI populism’s warning shots - tech is better served alongside a healthy serving of policy instead of coursed too far apart, otherwise - the disconnect will be apparent, like a poorly executed menu at a restaurant. Having spent some formative years in DC working in tech policy as a technologist in a policy-world in the 2010s, I harbor many strong feelings on the subject. Engagement doesn’t lead to perfect outcomes, but the outcomes I believe are net-positive to not engaging at all.
Delia Cai’s - How to tell the difference between T magazine and NYT Magazine - in case you weren’t wondering, T magazine for the pictures, NYT Mag for the words.
Cassie Kozyrkov’s - The World’s Most Powerful AI Is Here, But It’s Not For You - the collection of links in this post pretty much feels representative of a week-in-the-life of working with/in/around all the AI things!