AI Is a Magnifier
Smart in, smarter out. The simple formula for getting AI to actually be useful.
There is so much noise about AI right now. Half the feeds I open are people insisting it changes everything. The other half are people insisting it is hype that will pass. Both are partially right, and both are missing the part that actually matters: how you use it.
I get the same question, in different shapes, from people who have tried these tools and walked away frustrated.
“Why does mine come out generic?”
The answer is almost never the AI. It is almost always the prompt.
I have a line I keep coming back to, and it is the only one I want you to remember from this article: AI is a magnifier. Smart in, smarter out. Sloppy in, sloppier out. (That is a sharper version of my original line, which was: if you are smart, you will be smarter; if you are dumb, you will be dumber.) The model is not the bottleneck. The prompt is.
This is true whether you are using Claude, ChatGPT, Gemini, Copilot, or whatever new tool will be on the front page next week. The interface changes. The logos change. The pricing changes. The fundamental skill of asking the tool a question worth answering does not.
I have heard the same complaint from people in completely different fields. A doctor who tried it once and gave up because “it just told me obvious things.” A real estate agent who said it kept producing “generic listing copy that sounds like every other listing.” A small business owner who said it was “fine for grammar but useless for anything real.”
In all three cases, I asked to see the prompt. In all three cases, the prompt was one sentence long.
Learning a new tool requires an investment of time. A small one, but real. We are now conditioned for instant gratification, which, in my opinion, is far more damaging than anything AI will do.
You cannot ask a one-sentence question and expect a thoughtful, specific, useful answer. You would not accept that from a junior team member. You would not accept it from a contractor. You should not accept it from yourself when you are the one writing the prompt.
The Simple Method
The framework I use and teach is four letters. RCTF.
That stands for Role, Context, Task, Format. Every great prompt has all four. Skip one and the output drifts. Hit all four and the output stops looking like AI and starts looking like a deliverable.
R, Role
Tell the AI who it is supposed to be.
Not “you are a helpful assistant.” That is the default, and the default is mediocre. Tell it the actual role you want it to inhabit.
“Act as a senior tax advisor with twenty years of experience explaining complex tax issues to a client who does not understand taxes.”
“Act as a real estate agent writing a listing for a buyer, not a window-shopper.”
“Act as a financial advisor preparing for a difficult conversation with a retiree who lost money in the market.”
The role does two things at once. It activates a body of knowledge. It also sets a tone. A “senior tax advisor” sounds different from a “tax accountant.” A “patient teacher” sounds different from a “subject-matter expert.” Pick the role that matches what you actually need on the other side of the prompt.
C, Context
This is the part most people skip, and it is the part that matters most.
Give the AI the background. The situation. The constraints. The audience. What is true about your specific case is not true about the generic case. One caveat, though: do not include any PII about the client.
“My client is a W-2 employee earning around $180,000. They had a major life event this year, the sale of a primary home. They also had capital gains of $50,000 from various stock sales. They had no estimated payments because last year they got a refund. They are confused about why they owe.”
You can already feel the difference between that and a one-sentence “client wants to know why they owe.”
Context is where these tools turn from a search engine into something actually useful for your specific situation.
T, Task
State the verb. Be specific.
Draft. Summarize. Compare. Analyze. Outline. Calculate. Critique.
The reason this matters is that AI tools default to “explain.” Ask a question, get an explanation. But explanations are usually not what you actually need. You need a draft. You need a comparison. You need a checklist. You need an outline. Tell it which one.
Pair the verb with a quantity. “Draft a 200-word reply.” “Outline a 5-section memo.” “Compare three options in a table.” Word counts and section counts are the cheapest way to make the output usable on the first pass.
F, Format
Tell the AI what shape the output should take.
A memo. An email. A bulleted list. A table with three columns. A one-page handout. A client-friendly explanation that avoids jargon.
Format is the part where the output stops looking like AI and starts looking like the thing you need to send. You can paste it. You can print it. You can drop it into your client portal. The format is not an afterthought. It is the difference between “this is interesting” and “I am sending this.”
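If you happen to build prompts programmatically, say, through an API or an internal tool, the four parts map naturally onto a template. Here is a minimal Python sketch; the helper name and the labels are mine, purely illustrative, not any particular vendor's API:

```python
def build_rctf_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Assemble a prompt from the four RCTF components: Role, Context, Task, Format."""
    return "\n\n".join([
        f"Act as {role}.",      # Role: who the AI should be
        f"Context: {context}",  # Context: the specific situation and constraints
        f"Task: {task}",        # Task: a specific verb paired with a quantity
        f"Format: {fmt}",       # Format: the shape the output should take
    ])

prompt = build_rctf_prompt(
    role="a senior tax advisor explaining a tax outcome to a client",
    context=("My client is a W-2 employee earning around $180,000. "
             "They sold their primary home this year and are confused about why they owe."),
    task="Draft a 200-word reply in plain English with one real-world analogy.",
    fmt="A warm, jargon-free email that ends with an invitation to book a meeting.",
)
print(prompt)
```

The point of the sketch is the discipline, not the code: if any of the four arguments would be empty, the prompt is not ready to send.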
A Real Before-and-After
Here is the prompt most people write:
> “Write me an email explaining why my client owes more this year.”
Here is the same prompt with RCTF:
> “Act as a senior tax advisor explaining a tax outcome to a client. [Role] My client is a W-2 employee with an approximate income of $150,000 to $200,000. They asked why they owe more this year than expected. They had a major life event, the sale of a primary home, and made no estimated payments because they received a refund last year. They also had capital gains of $50,000 from various stock sales. [Context] Draft a 200-word reply in plain English. [Task] No jargon. Include one real-world analogy a non-accountant would understand. Tone: warm, expert, never condescending. End with ‘happy to chat more if helpful. Please book a meeting using the link in my email signature.’ [Format]”
The first prompt gets you a generic email. The second one gets you something close to what you would have written yourself, in roughly the time it takes to make a cup of coffee.
The difference is not the tool. It is the prompt.
The Moment That Always Lands
When I walk people through this, the moment that always lands is when someone tries an old prompt, then rewrites it with RCTF, then watches the second output. Their face changes. They get quiet. Then they say something like, “I have been blaming the wrong thing.”
That happens almost every time.
People assume that getting better output means picking a smarter model, or upgrading to a paid plan, or learning some advanced technique they read about on Reddit. None of that is wrong, exactly. But all of it is downstream of the prompt.
A great prompt on a free model beats a sloppy prompt on the most expensive model in the world. Every single time.
Why This Works
Remember the small business owner I mentioned at the top, the one who said AI was useless for anything real? A few weeks after that conversation, she sent me a one-line follow-up. “I tried RCTF on my social media posts. They sound like me again.”
Same tool. Same business. Different prompt.
RCTF works whether you are drafting a contract, a recipe, a newsletter, or a toast. It does not care what you do for a living. It cares whether you respect the tool enough to use it well.
The people who get the most out of these tools are not the most technical people. They are the ones who treat the prompt like a brief to a junior associate. That is not a technology skill. It is a thinking skill, and, like any thinking skill, it improves with practice.
It also explains why some of the loudest skeptics are people who tried these tools once, got a generic answer, and concluded the technology was overhyped. They are not wrong about what they got. They are wrong about why they got it.
The Boundaries
A reminder, because the boundaries matter as much as the prompt.
If you are a tax professional, anonymize before you paste. No real names. No SSNs or EINs. No real dollar amounts. IRC §7216 and §6713 do not care which AI plan you are on, nor whether you toggled training off. Plan tier does not override federal law.
If you are not a tax professional, the same logic applies in every other context. Do not put client data, patient data, financial data, or anything else you would not put on a public website into an AI tool unless your firm has explicitly approved it. The chat window feels private, but that does not make it private.
And in every case, verify before you send. These tools can hallucinate citations, dates, and authority. The output looks confident even when it is wrong. We all know that being confidently wrong is still wrong. If you need an example of what this looks like, I can point you to some Facebook groups. You sign off on what reaches the client or the reader. Not the AI.
The Loop Back
The line I keep coming back to.
AI is a magnifier. Smart in, smarter out. Sloppy in, sloppier out.
The prompt is the part that decides which of those two you get. Not the model. Not the plan. Not the company building the tool. The prompt.
If you have been frustrated with AI tools, try this. Take the worst prompt you wrote this week, the one where you got a garbage answer and gave up. Rewrite it with Role, Context, Task, and Format. See what comes back. I would bet a coffee (a good coffee) that the difference is bigger than you expected.
Reply, comment, or email me. I read everything and do my best to respond.
Me? I rewrite my own prompts more than I would like to admit. Every time I do, the output gets better. Every time I do not, I have only myself to blame.
What is the first prompt you are going to rewrite this week?
For paid subscribers
If you are a tax professional, the companion to this article is a 12-prompt library structured around RCTF and ready to copy, customize, anonymize, and use. It covers client communications, IRS notice response templates, year-end planning letters by income segment, newsletter explainers, web copy, two interactive calculator specs, and an SOP-via-interview prompt that gets the process out of your head and into a document.
Paid Josh & Taxes subscribers can grab the PDF below.