
2026-02-20
I let AI run my email campaigns for 90 days with minimal human oversight. This was not a supervised experiment where I reviewed every draft before sending. I wrote the initial instructions, set up the automation, and intentionally limited my involvement to a one-hour weekly review. The AI handled subject line generation, email body copy, send time optimization, and A/B testing. My job was to look at the performance data once a week and make small adjustments to the instructions. I was nervous about this because email is the most personal marketing channel I manage, and I had spent three years building a list of 4,700 subscribers who expected a specific tone and voice from me. Handing that over to an algorithm felt like a risk.
How I Set It Up and Why It Almost Failed in Week Two
I connected ChatGPT to my email platform through a third-party API integration that cost $29 per month. The first step was writing detailed content briefs for each type of email we sent: welcome sequence for new subscribers, weekly newsletter for existing subscribers, promotional emails for product offers, and re-engagement emails for inactive subscribers. Each brief specified the target audience, the goal, the tone, the length range, and examples of past emails that had performed well. Writing the briefs took about four hours total. I thought this was enough preparation.
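To make the structure concrete, here is a minimal sketch of what one brief looked like expressed as data. The field names and example values are mine for illustration; the integration did not use this exact schema.

```python
from dataclasses import dataclass, field

@dataclass
class ContentBrief:
    """One brief per email type, sent along with every draft request."""
    email_type: str       # e.g. "welcome_sequence", "weekly_newsletter"
    audience: str         # who the email is for
    goal: str             # the single action the email should drive
    tone: str             # plain-language description of the voice
    length_range: tuple[int, int]                   # (min_words, max_words)
    reference_emails: list[str] = field(default_factory=list)  # past winners

# Illustrative brief for the weekly newsletter (example values, not my real brief)
newsletter_brief = ContentBrief(
    email_type="weekly_newsletter",
    audience="existing subscribers who opted in via the blog",
    goal="drive clicks to this week's featured article",
    tone="warm, direct, first person, no hype",
    length_range=(250, 400),
    reference_emails=["2025-11-04-issue.txt", "2025-12-09-issue.txt"],
)
```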
Week two was a disaster. The AI sent a promotional email for a product launch with a subject line that read “You Deserve This.” The open rate was 11 percent — about a third of our normal rate. The email body was full of generic marketing language like “revolutionary solution” and “transform your workflow.” Two people replied asking to be unsubscribed because the tone felt “salesy and fake.” I had to send a manual apology email to the list and offer a discount to salvage the launch. The mistake was that my content brief had not specified which words to avoid. After that incident, I added a list of 47 banned words to every content brief, including “revolutionary,” “game-changing,” “transformative,” “industry-leading,” and “best-in-class.” The AI never used those words again.
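A banned-word rule like this is also easy to enforce in code rather than trusting the brief alone. A minimal sketch, with the list abbreviated to five entries and the function name my own invention:

```python
# Abbreviated here; the full list had 47 entries
BANNED_WORDS = [
    "revolutionary", "game-changing", "transformative",
    "industry-leading", "best-in-class",
]

def flag_banned_words(draft: str) -> list[str]:
    """Return every banned word that appears in a draft, case-insensitively."""
    text = draft.lower()
    return [word for word in BANNED_WORDS if word in text]

draft = "Our revolutionary solution will transform your workflow."
violations = flag_banned_words(draft)
if violations:
    print(f"Draft needs a rewrite, found: {violations}")  # ['revolutionary']
```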
The Numbers That Surprised Me
Over the full 90 days, average open rates settled at 37 percent compared to my manual average of 38 percent — essentially the same. Click-through rates improved from 4.2 percent to 4.7 percent, a small but consistent gain. The biggest surprise was send time optimization. I had always sent emails at 10 AM on Tuesdays because that was when I had time in my schedule. The AI tested different send times across the week and found that for my specific audience, 2 PM on Thursdays produced 14 percent higher open rates and 22 percent higher click rates. I had been sending at suboptimal times for three years without knowing it because I never tested the assumption.
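The test itself is nothing exotic. Once your platform exports per-campaign stats with a send timestamp, the comparison reduces to grouping open rates by send slot. A rough sketch with placeholder data (the rows below are invented to show the shape of the analysis, not my actual numbers):

```python
from collections import defaultdict

# Placeholder rows, not my real export: (weekday, hour, opens, delivered)
sends = [
    ("Tue", 10, 1720, 4650),
    ("Thu", 14, 1985, 4660),
    ("Thu", 14, 1950, 4640),
]

open_rates = defaultdict(list)
for weekday, hour, opens, delivered in sends:
    open_rates[(weekday, hour)].append(opens / delivered)

# Rank send slots by average open rate, best first
ranked = sorted(open_rates.items(),
                key=lambda item: sum(item[1]) / len(item[1]),
                reverse=True)
for (weekday, hour), rates in ranked:
    print(f"{weekday} {hour:02d}:00  avg open rate {sum(rates) / len(rates):.1%}")
```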
The subject line testing was another unexpected win. The AI generated ten subject lines per email, tested the top two against small segments, and sent the winner to the rest of the list. Over 90 days, this systematic approach improved subject line performance by about 12 percent compared to my manual approach. I was good at writing subject lines, but I was not consistent — sometimes I rushed and wrote something mediocre. The AI was consistently decent, and consistency beat occasional brilliance over time. The time savings were dramatic: I went from spending about seven hours per week on email to about one hour. That hour was spent reviewing performance data, responding to personal replies from subscribers, and refining the content briefs based on what had and had not worked the previous week.
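The mechanics are simple enough to reproduce without AI at all; the hard part is doing it every single week. A sketch of the split logic, where the 10 percent test pool is an assumption for illustration rather than the exact split the tool used:

```python
import random

def split_for_subject_test(subscribers: list, test_fraction: float = 0.10):
    """Carve out two equal test segments; the rest is the holdout.

    Each test segment gets one of the top two candidate subject lines;
    the winner by open rate is then sent to the holdout.
    """
    pool = subscribers[:]
    random.shuffle(pool)
    per_segment = int(len(pool) * test_fraction / 2)
    segment_a = pool[:per_segment]
    segment_b = pool[per_segment:2 * per_segment]
    holdout = pool[2 * per_segment:]
    return segment_a, segment_b, holdout

a, b, rest = split_for_subject_test(list(range(4700)))
print(len(a), len(b), len(rest))  # 235 235 4230
```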
The Problems Nobody Talks About
There were problems that I did not anticipate. About 8 percent of the AI-generated emails had a slightly off tone that I caught in my weekly review but only because I was looking for it. A few slipped through when I was busy and those emails had engagement rates about 30 percent below average. The AI struggled with humor — any attempt at being funny landed flat or came across as inappropriate. The AI could not handle subscriber replies that asked specific questions about our products or services. Those needed human responses, and I had to check for them manually. The AI also had no awareness of external events. When a competitor launched a similar product during the test period, the AI continued sending its scheduled content as if nothing had happened. A human marketer would have adjusted the strategy. The AI could not detect or respond to competitive moves.
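In practice, even a crude triage script would have surfaced most of those questions between reviews instead of leaving them to a manual sweep. A sketch of the idea (the trigger phrases are illustrative; real replies need smarter handling):

```python
# Illustrative trigger phrases, not a production-grade classifier
QUESTION_SIGNALS = ("?", "how do i", "can you", "does your", "pricing", "refund")

def needs_human_reply(reply_text: str) -> bool:
    """Crude triage: flag replies that look like product or support questions."""
    text = reply_text.lower()
    return any(signal in text for signal in QUESTION_SIGNALS)

print(needs_human_reply("Does your product integrate with Shopify?"))  # True
print(needs_human_reply("Loved this issue, thanks!"))                  # False
```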
Would I Do It Again?
Yes, but with important changes to the approach. The ideal setup for me turned out to be AI handling about 70 percent of the work — drafts, testing, scheduling, optimization — while I handle the remaining 30 percent — final tone checks, strategic decisions, personal replies, and competitive awareness. The pure automation experiment taught me that AI can handle the routine work well but needs human judgment for the exceptions. I have continued using the system with this hybrid approach and the results have been consistently better than either fully manual or fully automated. The seven hours per week I saved have been reinvested into creating better content for the emails, which has improved overall performance further. The key insight is that AI should augment your marketing, not replace it. When you treat it as a partner rather than a replacement, the results can be surprisingly good.
What I Learned About AI and Brand Voice
One detail that I did not expect: the AI was actually better at maintaining a consistent tone than I was. I would sometimes write warm and friendly emails when I was in a good mood and more direct emails when I was busy or stressed. The AI produced the same tone every time because it followed the same instructions every time. Subscribers started commenting that the emails felt “more consistent” during the AI period, even though they did not know AI was involved. This made me realize that my own writing quality varied more than I thought. The AI’s consistency was a genuine benefit that I had not anticipated. The downside was that the AI could not match the warmth of my best manually written emails. The average quality went up, but the peak quality went down. Whether that trade-off is worth it depends on whether you value consistency or occasional brilliance more.