There’s a lot of talk about AI agents as if the future is one magic prompt.
That’s not how it’s working for us.
At CLCK, we’re using Hermes as the workspace layer for our AI agent work, and Arlo as the embedded assistant inside the business. Arlo helps across client delivery, sales operations, internal operations, project work and other parts of the business.
This article is the marketing-specific follow-up to our broader Hermes Agent for Business field note.
That broader piece explains why we started using Hermes, what changed as the work became more serious, and why structure matters. This one zooms in on our own marketing.
Not as a content vending machine. More like a working system around the marketing function: keeping track of ideas, deciding what matters next, briefing the right work, creating assets, checking performance, and bringing me in where human judgement actually matters.
This is not an official Hermes guide and we don’t represent Nous Research. It’s a practical look at what we’ve learnt using Arlo and Hermes inside CLCK.
The Hermes docs are the place to go for the product details around memory, skills, tools, scheduled jobs and messaging surfaces.
This is what the marketing workflow looks like in practice.
The basic setup: our marketing runs through channels
Most of our agent work happens inside Zulip. It could just as easily be Slack, Teams, Discord or another internal workspace. The important part is that the work happens where the team already communicates, not in a separate AI playground that someone has to remember to check.
For marketing, we have separate channels and topics for the main workstreams:
- SEO
- content marketing
- social media
- paid ads
- reporting and performance review
Each channel has its own purpose, but they all follow a similar pattern.
There’s a main strategy topic that keeps the big picture in mind: priorities, current plans, what’s already been done, what needs review, what data is available, and what should happen next.
Then the actual doing gets handed off to smaller, bounded work threads.
That split has become one of the most important parts of the whole setup.
One important rule: AI prepares, people approve
The operating rule is simple: AI prepares the work, and people approve anything that leaves the business or changes a live system.
That includes external sends, public posts, live campaign changes, meaningful account or platform changes, and anything client-visible.
Arlo can gather context, draft, inspect, check and recommend inside clear permissions. It doesn’t get a blank cheque to publish, send, change campaigns or update client-visible material without review.
That rule keeps the workflow practical. The agent can remove a lot of coordination work, but the judgement stays with people.
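If it helps to picture it, the rule boils down to a gate in front of anything external. The sketch below is purely illustrative rather than how Hermes is built: the action types and names are made up for the example, but the shape is the point. Internal prep runs freely; anything outward-facing waits in a review queue until a person approves it.

```python
# Illustrative only: a gate in front of anything that leaves the business.
# The names and action types here are made up for the sketch, not Hermes APIs.
from dataclasses import dataclass

EXTERNAL_KINDS = {"publish_post", "send_email", "change_live_campaign", "edit_client_page"}

@dataclass
class PreparedAction:
    kind: str       # e.g. "publish_post"
    summary: str    # what the agent wants to do
    payload: dict   # the draft, settings or change set
    approved: bool = False

def handle(action: PreparedAction, review_queue: list) -> str:
    """Prep work executes; external actions wait for a person to approve."""
    if action.kind in EXTERNAL_KINDS and not action.approved:
        review_queue.append(action)
        return f"queued for review: {action.summary}"
    return f"executed: {action.summary}"
```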
Why we separate strategy from execution
If you try to run everything in one long AI thread, it gets messy quickly.
The same thread that was meant to think strategically ends up full of keyword exports, layout checks, metadata checks, image briefs, HubSpot drafts, build logs and every other bit of operational detail.
After a while, the thread loses the plot. It might still be doing tasks, but it becomes harder to see the plan.
So we use a parent-child pattern, but the plain English version is this: don’t make one AI conversation both the strategist and the labourer for too long.
The main strategy thread is responsible for judgement:
- What are we trying to achieve?
- What’s the highest-value opportunity?
- What’s already in progress?
- What needs a human review point?
- What should be delegated next?
The smaller work threads handle bounded execution:
- draft the article
- prepare the web page for review
- create the image brief
- check schema and metadata
- pull the ad performance report
- prepare social posts for review
- test the page preview
When a work thread finishes, it reports back in a compact format: what it did, what changed, what it verified, any risks or blockers, and what it recommends next.
That means the strategy thread keeps the direction clean, while the detailed work happens somewhere else.
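If it's useful, that hand-back is simple enough to sketch as a data structure. The field names below are ours rather than a Hermes feature, and the example values are made up, but it shows the shape we ask for.

```python
from dataclasses import dataclass

@dataclass
class WorkThreadReport:
    """The compact hand-back from a bounded work thread to the strategy thread."""
    did: str              # what the work thread actually did
    changed: list[str]    # what it created or modified
    verified: list[str]   # what it checked and confirmed
    risks: list[str]      # open risks or blockers for a human to weigh
    next_step: str        # what it recommends the strategy thread do next

# Made-up example: a build-preview pass reporting back
report = WorkThreadReport(
    did="Built a preview page for the new migration article",
    changed=["draft page created", "three internal links added"],
    verified=["mobile layout", "meta description present"],
    risks=["hero image still a placeholder"],
    next_step="Human review of the preview, then schedule",
)
```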
The pattern deserves its own article, because it’s probably one of the biggest shifts in how we’re thinking about AI agents. For now, the simple version is this: separate the thinking lane from the doing lane, then bring the evidence back before making a decision.
SEO: finding gaps, briefing pages and building properly
SEO is one of the clearest examples.
We have a main SEO channel where the strategy thread looks at the website, current pages, keyword opportunities and performance data. Where connected sources allow, Arlo can help spot gaps where we don’t have a strong page for a search topic that matters to us.
For example, it might find a missing opportunity around a specific HubSpot migration topic. Maybe people are searching for moving from a particular platform into HubSpot, and we don’t have a dedicated page for that yet.
The strategy thread doesn’t just say, “write a blog post”. It breaks the work into smaller jobs.
One pass might do the keyword and search-intent research. That includes checking what people are actually searching for, how competitive the topic looks, and whether it should be a blog post, a service page, a resource page or part of an existing page.
Another pass might create the content brief and draft the copy. That work needs to cover the search intent, but it also has to sound like us. We don’t want sterile SEO copy that technically covers the keyword but doesn’t help a real business owner make a decision.
A separate build-preview pass can take approved copy and turn it into a structured page for review. That includes layout, mobile checks, internal links and anything else needed before a person signs it off.
Then another QA pass can cover the background SEO details that are easy to miss because they’re not always visible on the page:
- page title
- meta description
- headings
- schema
- tags
- sitemap inclusion
- internal links
- social preview image
Each part reports back to the strategy thread, so the main lane knows where the work stands without drowning in build detail.
We’ve also connected Arlo to tools like DataForSEO. That was actually something Arlo recommended. Instead of signing up for a heavy SEO platform just to answer specific questions, it suggested using the API directly and paying for the specific data we need.
DataForSEO describes its pricing as pay-as-you-go, with customers paying for the individual services they consume.
That means we can ask practical questions like:
- What’s the search volume for this topic?
- Are there related searches we’re missing?
- Is there a lower-competition version of the same idea?
- Does this deserve its own page, or should it support an existing page?
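To make the "pay for the data you actually need" point concrete, here's roughly what one of those questions looks like as a direct call. Treat it as a sketch: the endpoint and response fields are from my reading of the DataForSEO v3 docs, so check the current docs before relying on it, and the credentials and keyword are placeholders.

```python
import requests

# Ask DataForSEO for search volume on one candidate topic.
# Endpoint and payload shape per the DataForSEO v3 docs; credentials
# and keyword below are placeholders, not our real setup.
AUTH = ("login@example.com", "api_password")
URL = "https://api.dataforseo.com/v3/keywords_data/google_ads/search_volume/live"

payload = [{
    "keywords": ["hubspot migration"],   # placeholder topic
    "location_name": "United Kingdom",
    "language_name": "English",
}]

resp = requests.post(URL, auth=AUTH, json=payload, timeout=30)
resp.raise_for_status()

for task in resp.json().get("tasks", []):
    for item in task.get("result") or []:
        print(item.get("keyword"), item.get("search_volume"), item.get("competition"))
```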
The goal isn’t to publish content for the sake of publishing. It’s to find underexploited gaps, create the right page for them, and connect that page properly into the rest of the site.
Content marketing: turning rough ideas into finished assets
Our content marketing channel works in a similar way.
The strategy thread keeps track of the bigger content plan. It knows what’s in the idea bank, what’s being drafted, what needs my review, what’s ready to build or schedule, and what has already been published.
We use Trello as the visible board for that. If I have an idea, I can quickly add it to Trello or ask Arlo to add it for me. The board gives us a simple queue of things we might want to produce:
- blog articles
- LinkedIn posts
- emails
- resource pages
- lead magnets
- free guides
- supporting social assets
Trello isn’t the whole brain. It’s the shared status board. The full drafts live in Google Docs. The strategy and approvals happen in Zulip. The actual work happens in bounded threads.
That distinction matters because otherwise everything ends up buried in chat.
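The "add it to Trello" step is genuinely small. In practice Arlo does it through its tool access rather than a script, but a rough sketch against the standard Trello REST API, with placeholder key, token and list ID, looks like this:

```python
import requests

# Drop a rough content idea onto the Trello idea queue.
# Standard Trello REST API; key, token and list ID below are placeholders.
TRELLO_KEY = "your_api_key"
TRELLO_TOKEN = "your_api_token"
IDEA_LIST_ID = "abc123"  # the "Ideas" list on the marketing board

def add_idea(title: str, note: str = "") -> str:
    resp = requests.post(
        "https://api.trello.com/1/cards",
        params={
            "idList": IDEA_LIST_ID,
            "name": title,
            "desc": note,
            "key": TRELLO_KEY,
            "token": TRELLO_TOKEN,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]

add_idea("Field note: separating strategy threads from work threads")
```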
A typical content package might start with a rough note or voice memo from me. The strategy thread works out what the actual article should be, what angle is strongest, what audience it’s for, and what supporting assets should come from it.
Then a writing pass prepares the draft.
Once the draft is ready, I review it. I might say:
- the tone is too formal
- it’s using the same word too much
- the point is right but the structure needs work
- it needs to be more conversational
- the opening isn’t strong enough
- it needs a better example
- it needs links to related content
- the layout needs more images or diagrams
That feedback loop is important. The goal is not to remove the human from the process. The goal is to remove the empty coordination work, so the human review can focus on judgement.
After the article is approved, a build-preview pass can handle the page structure. That means preparing the blog page, checking the design, adding relevant internal links, suggesting supporting diagrams, and making sure the page is ready for a proper review.
The practical part is that corrections become reusable
One of the best things about using Hermes this way is that the corrections don’t just disappear into a chat thread.
When I give feedback that should apply again, Arlo can turn it into a checklist, a skill or a clearer working instruction for next time.
If I say, “don’t forget internal links”, that becomes part of the content build checklist.
If I say, “this sounds too formal”, the copywriting guidance gets tightened.
If I say, “we need more practical visuals”, the next article brief can include diagrams, flowcharts or suggested screenshots as a normal part of the process.
That’s where the compounding effect starts to show up.
To be clear, this doesn’t mean the underlying model is being trained on CLCK every time I leave a comment. It’s more practical than that. The system keeps better operating instructions around the work, so I don’t have to repeat the same correction every week.
A normal AI chat can be impressive for one task, but it often forgets the business context unless you keep re-explaining it. A working agent system should get better at the recurring patterns of the business. It should remember the review standards, the channel differences, the tone preferences, the approval points and the little operational details that usually live in someone’s head.
It still needs review. But it needs less repeated instruction each time.
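A lot of that "better operating instructions" idea is no fancier than a standing checklist a build pass has to walk through before handing work back. Our real version lives as working instructions inside Hermes rather than code, but a minimal sketch of the idea, with example items taken from the feedback above, looks like this:

```python
# A correction only has to be made once if it becomes a standing check.
# Illustrative only; items lifted from the feedback examples described above.
CONTENT_BUILD_CHECKLIST = [
    "internal links to related pages added",
    "tone is conversational, not formal",
    "at least one practical visual: diagram, flowchart or screenshot",
    "meta title and description set",
    "social preview image present",
]

def open_items(done: set[str]) -> list[str]:
    """Return the checklist items a build pass has not yet ticked off."""
    return [item for item in CONTENT_BUILD_CHECKLIST if item not in done]

print(open_items({"meta title and description set"}))
```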
Social media: turning one idea into the right posts for each channel
Once an article or resource is approved, it usually creates a set of supporting assets.
That might include:
- a few LinkedIn posts
- an email to the database
- company page posts
- a Facebook post
- shorter follow-up posts that point back to older related content
The social media workflow can work from the same Trello board and the same Google Doc. It can see what content package is being promoted and what’s already in motion.
Where tool access allows, Arlo can inspect what’s scheduled in HubSpot, draft new posts and prepare a queue across channels.
I still review it before anything goes live.
That review is partly about accuracy, but it’s also about the small human things:
- Are the images rendering properly?
- Does the preview look right?
- Does the post sound like something I’d actually say?
- Is it too obviously AI-written?
- Is the same phrasing being repeated across channels?
- Is the call to action too heavy for the context?
The point is not to blast the same post everywhere.
My personal LinkedIn should usually be more opinion-led: what we’re learning, what’s changing, where I think the industry is going, and what we’ve noticed from doing the work.
The company page can be more practical: how-to content, service updates, resources, examples and tactical guidance.
The same core idea can feed both, but the posts shouldn’t sound identical.
That’s one of the jobs Arlo can help with: adapting the asset for the channel, while keeping the underlying message consistent.
Paid ads: strategy stays high-level, build work gets prepared for review
We’re also using the same model for Facebook ads and Google ads.
Where API access allows, the agent can pull account data, inspect campaign settings, prepare draft copy, generate image briefs and make recommendations without me needing to log in and click around every time.
But again, the strategy stays separate from the build work.
The strategy thread should own the decisions:
- What are we testing?
- Who are we targeting?
- What’s the offer?
- What budget makes sense?
- What should we leave running long enough to learn from?
- What should be changed, paused or tested next?
The detailed work gets prepared separately for review.
Creating a campaign inside an ad platform can involve a lot of fields, naming conventions, image assets, copy variants, tracking settings and final checks. That’s exactly the sort of work that can bloat a strategy thread if you keep it all in one place.
So the strategy thread can brief one work pass to prepare the campaign structure, another to create image concepts, another to prepare copy variants, and another to do a final QA pass before review.
The strategy thread remains the campaign manager. The execution work is prepared for a person to approve.
That’s a pattern I think more businesses will use as APIs become more agent-friendly. The interface doesn’t have to be the ad platform itself. It can be the internal workspace where you already make decisions.
For us, that’s Zulip. For another business, it might be Slack, Teams or Discord.
Reporting: daily checks without the manual dashboard routine
Reporting is another place where this becomes immediately practical.
Each marketing lane can have a scheduled daily check that pulls the relevant performance data and posts a summary back into the right thread.
A Google Ads report might say:
- how much we spent yesterday
- how many clicks we received
- what converted
- which search terms came through
- what looks wasteful
- what should be tested next
For example, it might notice that we paid for clicks on a low-intent variation of a keyword and recommend adding a negative keyword.
Then I can respond in the thread:
“Agree, prepare that change for approval.”
Or:
“Leave it running for a few more days before changing anything.”
The same applies to Facebook ads, social performance, SEO movement and content performance.
That doesn’t mean the agent gets to do whatever it wants. The valuable part is that it turns reporting from a passive dashboard into an active review loop.
Instead of me remembering to log into five tools and interpret everything from scratch, the system brings the relevant summary, the recommendation and the decision point into the same place we already work.
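For a feel of the shape of that loop: a daily check boils down to fetching the numbers, formatting the summary and posting it where the decision happens. Below is a rough sketch, assuming a hypothetical fetch_ads_metrics() helper in place of real ads API access, and Zulip's standard Python client; the stream and topic names are placeholders and the figures are dummy values.

```python
import zulip

def fetch_ads_metrics() -> dict:
    """Hypothetical helper standing in for real ads API access; dummy values."""
    return {
        "spend": 42.10,
        "clicks": 180,
        "conversions": 3,
        "wasteful_terms": ["example low-intent search term"],
        "suggested_test": "example next test",
    }

def post_daily_summary() -> None:
    m = fetch_ads_metrics()
    content = "\n".join([
        f"Spend yesterday: £{m['spend']:.2f}",
        f"Clicks: {m['clicks']}, conversions: {m['conversions']}",
        f"Looks wasteful: {', '.join(m['wasteful_terms']) or 'nothing flagged'}",
        f"Suggested next test: {m['suggested_test']}",
        "Reply in this topic to approve or hold any change.",
    ])
    client = zulip.Client()  # reads ~/.zuliprc by default
    client.send_message({
        "type": "stream",
        "to": "paid ads",                   # placeholder stream
        "topic": "daily Google Ads check",  # placeholder topic
        "content": content,
    })

post_daily_summary()
```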
What this changes for a small business
The biggest change is not that AI can write a blog post.
That helps, but it’s not the main thing.
The bigger change is that one person can manage a lot more marketing activity without having to personally carry every open loop.
There’s still human judgement involved. I still review the articles. I still check the posts. I still approve meaningful changes in ad accounts. I still look for wording that feels too polished, too generic or too obviously AI-shaped.
But I don’t have to manually coordinate every step.
The system can remember the queue, prepare the next draft, check the page, create the supporting assets, pull the report, surface the issue, and ask for the right decision at the right time.
That’s very different from just using ChatGPT to write copy.
It’s closer to having a lightweight marketing operations layer around the business.
What we’ve learnt so far
A few lessons are already clear.
1. Agents need structure more than freedom
A completely open-ended agent sounds powerful, but it usually creates messy work.
The practical setup is more constrained: clear channels, clear sources of truth, clear review points, clear permissions and clear return formats.
2. Keep strategy and implementation separate
If the same thread is doing the thinking, the drafting, the building, the testing and the reporting, it will eventually lose focus.
Separate the strategic judgement from the detailed work, then bring the result back with evidence.
3. Put the work somewhere visible
If everything lives in chat, it becomes hard to manage.
For us, Trello shows the queue and status. Google Docs holds the full drafts. HubSpot holds scheduled marketing assets. Zulip holds the strategy, approvals and reports.
Each tool has a job.
4. Human review is not a failure
The goal isn’t to make the agent magically perfect.
The goal is to make the first version good enough that the human can spend their time on judgement, not blank-page creation or admin.
If an article is 90% there, that helps. If a social queue is 90% there, that saves time. If a daily ad report saves me from logging into the platform just to find one obvious change, that matters.
5. The learning loop matters
The best systems improve when you correct them.
If you keep having to repeat the same feedback, the system isn’t really learning your business. Skills, checklists and reusable instructions are what turn one-off AI help into an operating capability.
6. APIs make this much more practical, but permissions still matter
A lot of marketing tools are becoming easier for agents to work with.
If the agent can read performance data, create drafts, check schedules and prepare changes through APIs, it can do far more than generate text.
But that only works if the permissions, approval points and review process are clear.
This is only one part of the business
Marketing is just one example of how we’re using Arlo and Hermes inside CLCK.
We’re using similar patterns across sales, operations, finance, client delivery and project work. I’ll write separately about those because each area has its own shape.
For now, the marketing example is a good place to start because it shows the range of work involved: strategy, content, SEO, creative, scheduling, ads, reporting and review.
The point isn’t that a small business can replace its marketing judgement with AI.
The point is that a small business can build a better operating system around the judgement it already has.
That’s what feels most promising to me.
Not one magic prompt.
A set of practical, reviewable workflows that help us keep moving.
And for a small team, that makes a real difference.
If you’re trying to work out where AI could safely help inside your marketing operations, start with one workflow, one source of truth and one clear approval point.
If you want help choosing that first workflow, apply for a strategy session.