Remember a few weeks ago when I told you about testing Make.com and getting it to work? I was excited about building that four-stage content workflow—from a Telegram question to published articles across multiple platforms. I've spent a few days actually trying to build the thing, and I have some updates.
The short version: it's way harder than I expected, even after getting the basic automation working. But also way more fun than I anticipated.
Where We Left Off
When I last wrote about this, I had successfully connected a few services through Make.com and felt optimistic about the whole project. The vision was still intact: ask a question in Telegram, have AI agents handle research, writing, editing, and publishing across multiple platforms. Four stages, each calling the next, like a well-oiled machine.
What I didn't anticipate was how much complexity would emerge once I moved beyond the basic proof of concept. And what an incredible opportunity there is!
The Documentation Problem
The biggest issue isn't the AI capabilities themselves—those work well when you can get them connected properly. It's that we're all working with tools that were released last month, updated last week, and will probably have new features by the time you finish reading this.
Here's what I've discovered building this thing: the instructions for these tools are often incomplete because everything is moving so fast. Services push out new configurations faster than anyone can document them. New features get added constantly. Video tutorials become partially obsolete within weeks because the interface has three new buttons and two moved menus.
My friend and collaborator showed me AI Automators, a content service that looked promising for some of the more complex integrations I wanted to build. The instructor clearly knew what he was doing, but trying to follow along was like trying to hit a moving target. The interface he was clicking through had gained new features since he recorded the video. Configuration options had been added. Authentication flows had new steps. Features he referenced were still there, but they were buried in different menus or carried additional settings I needed to figure out.
The most frustrating part? The gap between "here's what this amazing new tool can do" and "here's exactly how to make it work in your specific situation with the current version." I'd watch a demo where someone seamlessly connected five different services, then spend hours trying to figure out why my webhook wasn't firing, why my API key was getting rejected, or why the data format that worked in the demo was causing errors in my setup.
The Technical Reality
The deeper I got into this, the more I realized I was essentially doing R&D work. Each service has its own rapidly evolving configuration options. New features get pushed out regularly. Error messages are cryptic because the tools are so new that comprehensive error handling hasn't been built yet. A workflow that works perfectly on Tuesday might need updates on Wednesday because of new features you didn't know were coming.
Consider what should be a basic task: taking research from one AI service and passing it to a writing service. You'd think this would be straightforward after getting Make.com working, but you're dealing with:
Configuration options that change regularly as new features ship
Data formats that aren't standardized between services
Rate limits that vary by service and subscription tier
Authentication tokens that might pick up new security requirements
Services that add new capabilities almost daily
What starts as "connect A to B" becomes figuring out a system where half the components have new features every month and the other half will probably have completely different capabilities six months from now.
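To give you a feel for what even that "basic" handoff involves, here's a rough Python sketch. The URLs, field names, and environment variable are made-up stand-ins rather than the actual services I'm connecting; the point is how much glue hides behind "connect A to B": auth headers, rate-limit retries, and normalizing whatever format the research service returns into whatever the writing service expects.

```python
import os
import time

import requests

# Hypothetical endpoints and credentials: stand-ins for whatever research
# and writing services you've wired together, not real URLs.
RESEARCH_URL = "https://example.com/research-service/run"
WRITING_URL = "https://example.com/writing-service/draft"
API_TOKEN = os.environ["CONTENT_PIPELINE_TOKEN"]  # assumed env var name


def post_with_retry(url, payload, max_attempts=3):
    """POST JSON, backing off when the service rate-limits us (HTTP 429)."""
    for attempt in range(max_attempts):
        resp = requests.post(
            url,
            json=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=60,
        )
        if resp.status_code == 429:  # rate limited: wait, then try again
            time.sleep(2 ** attempt)
            continue
        resp.raise_for_status()  # surfaces rejected keys and bad payloads
        return resp.json()
    raise RuntimeError(f"{url} kept rate-limiting after {max_attempts} attempts")


def research_to_draft(question):
    research = post_with_retry(RESEARCH_URL, {"query": question})

    # The two services don't agree on a data format, so normalize in between.
    notes = research.get("summary") or research.get("results", "")
    sources = research.get("sources", [])

    draft = post_with_retry(
        WRITING_URL,
        {"topic": question, "notes": notes, "sources": sources},
    )
    return draft["text"]
```

Multiply that by every pair of services in the pipeline and you can see where the hours go.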
I found myself becoming an accidental expert in webhook configurations, JSON parsing, and troubleshooting integration quirks rather than focusing on the content strategy I actually wanted to improve. That was kind of fun in a puzzle-solving way, but I was spending more time figuring out the automation than I would have spent just writing the articles manually. At least this time.
The Breakthrough Moment: Using AI to Build Better
Then I watched someone build an entire Make.com workflow with Claude's help. Instead of struggling through documentation and trying to figure out configurations alone, they were having a conversation with Claude about what they wanted to achieve. Claude helped them structure the workflow logic, suggested the right modules to use, and even helped debug when things weren't working properly.
That's when something clicked: I'd been thinking about this all wrong.
Instead of trying to become an expert in every service I wanted to connect, what if I used AI as my workflow architect? Modern AI systems understand how these services work together, can suggest better approaches when I hit roadblocks, and can help me troubleshoot problems without having to dig through scattered documentation.
This wasn't about replacing the complex workflow; it was about having a much smarter way to build it. To make this work, two tips turned out to be crucial:
Make sure web search is enabled in your LLM. The most recent information will be on the web, not in the LLM.
Ask the LLM to build reusable modules. It will then break your idea into reusable scenarios that can run concurrently. Note: I had to ask for this; the LLM didn't do it at the outset.
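For a concrete picture of what those reusable scenarios end up looking like: in Make.com, a scenario that starts with a Custom webhook trigger can be kicked off by anything that can POST JSON to it, including another scenario. The webhook URLs below are placeholders, and the fan-out (publishing to two platforms at once) is just an illustration of the concurrency, not my exact setup.

```python
import requests
from concurrent.futures import ThreadPoolExecutor

# Placeholder webhook URLs. Make.com generates a unique address for each
# scenario that starts with a Custom webhook trigger; these are stand-ins.
SCENARIO_WEBHOOKS = {
    "publish_platform_a": "https://hook.make.example/placeholder-a",
    "publish_platform_b": "https://hook.make.example/placeholder-b",
}


def trigger_scenario(item):
    """Kick off one reusable scenario by POSTing the article to its webhook."""
    name, url = item
    resp = requests.post(url, json={"article": "final edited draft"}, timeout=30)
    resp.raise_for_status()
    return name, resp.status_code


# Because each publishing target is its own scenario, the pushes can run
# concurrently instead of queuing up in one long chain.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(trigger_scenario, SCENARIO_WEBHOOKS.items()))
print(results)
```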
What I'm Learning About AI-Assisted Building
This experience is teaching me something important about building with AI tools. The real power isn't just in what the AI tools can do independently—it's in how they can help you build better systems with other tools.
For Small Operations: Instead of trying to master every tool you want to use, get good at collaborating with AI to help you build what you need. Any good AI assistant can help you understand configurations, suggest better approaches, and debug problems much faster than you could by reading documentation alone.
For Larger Teams: Having AI help with workflow design and troubleshooting means your technical people can focus on bigger problems rather than getting stuck on configuration details.
For Anyone Building This Stuff: Use AI as your building partner, not just as the thing you're building with. Combining human creativity and AI problem-solving is way more powerful than either alone.
The New Approach
This shift from "struggling through documentation" to "AI-assisted building" has completely changed how I approach these projects. Instead of figuring out every detail myself, I'm having ongoing conversations with AI about what I want to achieve and how to get there.
The workflow I'm building now uses AI in two ways: as tools within the content pipeline, and as my building partner for creating the pipeline itself. When I hit a roadblock with Make.com, I describe the problem to my AI assistant and usually get a solution within minutes instead of spending hours hunting through forums.
This changes what skills you need. Instead of becoming an expert in every tool's documentation, you get good at describing problems clearly and working collaboratively with AI to solve them.
Where the Project Stands Now
I'm still working on the Telegram-to-publication workflow, and it's coming together much faster now.
The system still triggers from Telegram, still has AI agents for research, writing, editing, and publishing, but now I have an intelligent building partner helping me solve configuration problems, optimize data flow, and debug issues as they come up.
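If it helps to see the shape of it, here's a toy sketch of that four-stage chain. Every function here is a hypothetical stand-in for one of the Make.com scenarios; the only thing it demonstrates is that each stage's output feeds the next.

```python
# Hypothetical stand-ins for the four Make.com scenarios; each stub just
# labels its stage so the handoff between stages is visible when run.
def run_research(question):
    return f"research notes on: {question}"

def write_draft(question, notes):
    return f"draft answering '{question}' using {notes}"

def edit_draft(draft):
    return draft + " [edited]"

def publish(article):
    return [f"published to platform {n}: {article}" for n in ("A", "B")]


def handle_telegram_question(question):
    """The four-stage chain: each stage's output feeds the next."""
    notes = run_research(question)        # stage 1: research
    draft = write_draft(question, notes)  # stage 2: writing
    article = edit_draft(draft)           # stage 3: editing
    return publish(article)               # stage 4: publishing


print(handle_telegram_question("How do Make.com webhooks handle retries?"))
```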
It's not technically simpler than my original vision, but it's much more buildable because I'm not doing it alone.
The Fun Part (Because This Is Fun)
Here's the thing nobody talks about: this is genuinely enjoyable work. Yes, it's frustrating when you spend three hours trying to figure out why a webhook suddenly stopped working. But it's also incredibly satisfying when you finally get that data flowing between services exactly how you want it.
There's something addictive about building these workflows. Every small breakthrough feels like solving a puzzle. When you finally get that research agent to pass clean data to your writing agent, and then see that writing agent produce something actually useful, it's a genuine "holy shit, this is working" moment.
And when it does work? When you ask a question in Telegram and thirty minutes later have a well-researched, well-written draft ready for editing? That's when you realize you've built something that genuinely changes how you work.
Having AI as a building partner makes it even more fun. Instead of banging your head against configuration problems alone, you're having a conversation with something that actually understands what you're trying to build and can suggest solutions you wouldn't have thought of.
The Broader Lesson (And Why It's Worth It)
Here's what I think this means for anyone working with AI tools right now: we're all early adopters working with rapidly evolving technology. The tools that seem most advanced are also the ones changing fastest. The workflows that look most impressive in demos might require the most maintenance.
But that's also what makes it exciting. We're not just using tools—we're helping figure out how these tools should work together. Every problem we solve, every integration we figure out, every workflow we build is contributing to the knowledge base that will make this easier for everyone who comes after us.
The most effective approach seems to be partnering with AI to help you build, rather than trying to master every tool independently. Let AI help you understand the complexity rather than trying to avoid it.
The Payoff
I'm definitely having fun with this. The vision of seamlessly connected AI agents handling complex workflows is absolutely achievable—we're just figuring out the best way to build them.
But here's the real payoff: if you figure this out, you've solved an important problem for yourself. You've built something that actually changes how you work. And in a world where everyone's talking about AI but few people are actually building complex workflows with it, that puts you way ahead of the curve.
Plus, you get to say "I built an AI workflow that starts with a Telegram message and ends with published articles" at parties. Which, let's be honest, is pretty cool. At least at the parties I want to be at.
This is the kind of challenge we work through in our Innovation Sprints. If you're wrestling with similar automation questions or want to prototype AI solutions that actually work with today's tools, reach out: dave@davemerwin.com