How to make a Google Ads AI agent
Scale paid marketing faster with AI
An AI agent for Google Ads does three things: monitors your campaigns and surfaces what matters, helps you plan what to do next, and executes approved changes on your behalf. How well it does any of those three things depends almost entirely on what data and context you give it.
The attribution foundation
Before any agent logic, you need cross-channel attribution data you control. Google attributes conversions using its own windows and models, designed to maximize credit for Google. Meta does the same for Meta. Every platform overcounts its own contribution. Your agent should reason from a unified attribution model that measures all channels the same way.
The setup: every incoming visitor is parsed and identified by traffic source, channel, and campaign. Your attribution model (first touch, last touch, or multi-touch) assigns credit to each touchpoint, deduplicated across both paid and organic channels. The key is having campaign-level attribution on all revenue movement metrics: new customers, expansion, churn, contraction, reactivation. A tool like Roadway handles this out of the box, or you build it internally. When you can see which campaigns actually drive revenue and which drive customers who churn, your agent is working from real signal instead of platform vanity metrics.
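The credit-assignment step above can be sketched in a few lines. This is an illustrative helper, not Roadway's implementation; the journey format and function name are assumptions, and a linear split stands in for whatever multi-touch weighting you choose.

```python
from collections import defaultdict

def assign_credit(touchpoints, revenue, model="multi_touch"):
    """Assign conversion credit across a customer's deduplicated
    touchpoints. Each touchpoint is a (channel, campaign) tuple in
    chronological order; revenue is the amount to distribute."""
    credit = defaultdict(float)
    if not touchpoints:
        return credit
    if model == "first_touch":
        credit[touchpoints[0]] = revenue
    elif model == "last_touch":
        credit[touchpoints[-1]] = revenue
    else:  # linear multi-touch: split credit evenly across touches
        share = revenue / len(touchpoints)
        for tp in touchpoints:
            credit[tp] += share
    return credit

# One customer journey spanning paid and organic channels
journey = [("google_ads", "brand_search"),
           ("organic", "blog"),
           ("google_ads", "retargeting")]
credit = assign_credit(journey, 900.0)
```

Because both paid and organic touches sit in the same journey, Google Ads campaigns only get the share a unified model actually grants them.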
Without cross-channel attribution, your agent optimizes for whatever proxy metric you have given Google, and those proxies are not your business goal.
Goal, funnel, and guardrail metrics
These three things go into every agent run as configuration:
Goal metric. What the agent is trying to move. One number. Paid customers, revenue, qualified pipeline - pick the metric that actually represents success for your business and be specific about the target.
Funnel metrics. The chain of events between a click and the goal: CTR, landing page CVR, trial signup, activation, paid conversion. The agent needs to see all of them so it can localize problems. A drop in goal metric conversions could be a targeting problem, a landing page problem, or an activation problem. Funnel metrics tell it which.
Guardrail metrics. The constraints the agent operates within: max CPL before a campaign gets flagged, minimum conversion volume before structural changes are made, impression share floor on brand terms. These prevent the agent from making locally correct but globally wrong decisions.
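A run configuration covering all three can be as simple as one structured object. The metric names, targets, and guardrail limits below are illustrative assumptions, not recommended values:

```python
# Hypothetical agent-run configuration: one goal metric, the funnel
# of events leading to it, and guardrails with hard limits.
AGENT_CONFIG = {
    "goal": {"metric": "paid_customers", "monthly_target": 120},
    "funnel": ["ctr", "landing_page_cvr", "trial_signup",
               "activation", "paid_conversion"],
    "guardrails": {
        "max_cpl_usd": 85.0,                      # flag campaigns above this
        "min_conversions_for_structural_change": 30,
        "brand_impression_share_floor": 0.80,
    },
}

def violated_guardrails(campaign_stats, config):
    """Return the names of guardrails a campaign currently violates."""
    g = config["guardrails"]
    violations = []
    if campaign_stats["cpl_usd"] > g["max_cpl_usd"]:
        violations.append("max_cpl_usd")
    if campaign_stats["brand_impression_share"] < g["brand_impression_share_floor"]:
        violations.append("brand_impression_share_floor")
    return violations
```

Passing this object into every run keeps the agent's decisions bounded by the same limits across sessions.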
Memory
A stateless agent is nearly useless. On each run, pass in a structured log of: what changes were made, when, and what happened afterward; which experiments are currently running; what has been tried and shut down; and what seasonal patterns look like for this account.
This context is what stops the agent from recommending things that already failed, misreading intentional budget reductions as anomalies, or ignoring the fact that CPL always spikes in January.
Store it as a simple structured document, update it after each run, and pass it in as context every time.
Tools and skills
Your agent needs two types of inputs: tools (API integrations that let it read and write data) and skills (markdown files that give it context and decision-making frameworks).
Tools (APIs):
- Google Ads API - campaign, keyword, ad group, and budget data. OAuth2 with a developer token, client credentials, and an MCC login if you work across multiple accounts. For reads: GAQL queries against search_term_view, keyword_view, campaign, ad_group_criterion, and campaign_budget. For writes: mutate operations on campaigns, ad groups, keywords, and budgets. Write access requires a Standard Access developer token (not Basic). Start read-only, and add write access after validating the monitoring output
- Attribution / data warehouse - your cross-channel attribution data, joined to revenue. This is the source of truth for all performance evaluation
- CRM - customer records, deal stages, revenue events tied to acquisition source
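As one example of the read side, here is a GAQL query the monitoring layer might run against search_term_view to surface spend with zero conversions. The field and resource names come from the Google Ads API; client setup, OAuth, and the actual `search_stream` call are omitted:

```python
# GAQL read: search terms that spent money over the last 30 days
# without converting (query waste candidates for negative keywords).
WASTE_QUERY = """
    SELECT
      campaign.name,
      search_term_view.search_term,
      metrics.cost_micros,
      metrics.conversions
    FROM search_term_view
    WHERE segments.date DURING LAST_30_DAYS
      AND metrics.conversions = 0
    ORDER BY metrics.cost_micros DESC
"""

def cost_usd(cost_micros: int) -> float:
    """Google Ads reports cost in micros: 1,000,000 micros = 1 unit."""
    return cost_micros / 1_000_000
```

With the official Python client, this string would be passed to `GoogleAdsService.search_stream` along with a customer ID.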
Skills (markdown files):
- Account strategy - business context, ICP definition, what a good customer looks like, which products or plans matter most
- Keyword strategy - which keyword clusters to prioritize, which to avoid, brand vs. non-brand rules, match type preferences
- Budget allocation rules - how budget should move between campaigns, minimum thresholds before reallocating, seasonal adjustments
- Negative keyword management - existing negative keyword lists, rules for when to add new negatives, query waste thresholds
- Bid management rules - when to use manual vs. automated bidding, Target CPA/ROAS targets by campaign, re-learning period handling
- Competitive context - key competitors, their positioning, auction overlap patterns
The three levels
Monitoring. The agent pulls performance data on a schedule, joins it with your cross-channel attribution data, checks all funnel metrics against targets, and surfaces ranked deviations. Output is a short prioritized list of what to look at, not a full account report.
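The ranking step can be sketched as a relative-deviation sort. Function and metric names are illustrative; a production version would also weight by spend and conversion volume:

```python
def rank_deviations(metrics, targets):
    """Rank funnel metrics by relative deviation from target.
    metrics/targets map metric name -> value. Returns a worst-first
    list of (metric, relative_deviation); positive means below target."""
    deviations = []
    for name, target in targets.items():
        actual = metrics.get(name)
        if actual is None or target == 0:
            continue  # no data or no meaningful target: skip
        deviations.append((name, (target - actual) / target))
    return sorted(deviations, key=lambda d: d[1], reverse=True)

ranked = rank_deviations(
    {"ctr": 0.030, "landing_page_cvr": 0.02, "trial_signup": 0.09},
    {"ctr": 0.030, "landing_page_cvr": 0.04, "trial_signup": 0.10},
)
```

Here the landing page CVR is 50% below target while CTR is on target, so the agent surfaces the landing page first instead of burying it in a full account report.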
Planning. Given what the monitoring layer found, the agent reasons about what to do. Should budget shift? Is a keyword cluster worth expanding? Is this a targeting problem or a landing page problem? This is a conversation: the agent proposes, you push back with context it does not have, and the plan sharpens. The history in its memory is what makes proposals accurate rather than generic.
Action. Approved changes execute via the Google Ads API using mutate operations: keyword pauses (ad_group_criterion status change), bid adjustments (ad_group_criterion CPC bid), budget shifts (campaign_budget amount), negative keyword additions (campaign_criterion or shared set), new keyword additions from high-performing search terms. Write operations require a Standard Access developer token and appropriate OAuth scopes. The agent generates a change manifest with its reasoning. You approve line by line. It executes. Keep the human in the loop on execution.
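The manifest-and-approval flow can be sketched as a small data structure; only approved line items ever reach the mutate step. Resource names and change kinds below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Change:
    kind: str        # e.g. "pause_keyword", "shift_budget", "add_negative"
    target: str      # resource the mutate operation would touch
    reasoning: str   # the agent's justification, shown to the reviewer
    approved: bool = False

def approved_changes(manifest):
    """Filter the manifest to line items a human has approved."""
    return [c for c in manifest if c.approved]

manifest = [
    Change("pause_keyword", "ad_group_criterion/123",
           "Zero conversions on $400 of spend in the last 30 days"),
    Change("shift_budget", "campaign_budget/456",
           "Move $50/day toward the top attributed campaign"),
]
manifest[0].approved = True   # human approves line by line
```

Anything not explicitly approved is dropped before execution, which is what keeps the human in the loop structurally rather than by convention.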
How to set it up in Roadway
- Create a new Coworker
- Filter for the channel
- Choose your goal metric (this is what your agent will optimize for)
- Choose the funnel metrics that lead to your goal metric
- Choose your guardrail metrics and define their limits
- Choose your refresh schedule
- Publish
Work with your AI Coworker to plan and execute campaigns. Reach out if you need any help - happy building: contact@roadwayai.com
FAQ
Can an AI agent fully manage Google Ads without human oversight?
No. An AI agent handles the data-heavy work: pulling performance data, detecting anomalies, surfacing waste, and proposing changes. But there is always context it does not have. A product launch delay, a sales team capacity constraint, a competitor doing something unusual. The agent proposes, you approve. The human in the loop is what keeps execution grounded in business reality.
How is a Google Ads AI agent different from Google’s automated bidding?
Google’s automated bidding (Target CPA, Target ROAS, Maximize Conversions) optimizes bids within Google’s ecosystem using Google’s conversion data. An AI agent operates at a higher level: it decides whether to use automated bidding at all, evaluates campaign performance against your own cross-channel attribution data, manages budgets across campaigns, handles keyword strategy, and compares Google’s actual contribution to revenue against other channels. The agent makes the strategic decisions. Google’s bidding handles the tactical bid auction.
What is the minimum budget to make a Google Ads AI agent worthwhile?
There is no hard minimum, but the agent needs enough conversion volume to detect patterns. If a campaign gets fewer than 30 conversions per month, there is not enough signal for the agent (or Google’s bidding algorithms) to optimize reliably. Accounts spending enough to generate consistent conversion data across multiple campaigns get the most value from an agent.
How long does it take before an AI agent starts making useful recommendations?
The monitoring layer produces useful output on the first run if your attribution data is in place. It will surface query-level waste, funnel bottlenecks, and guardrail violations immediately. Planning improves over time as the agent accumulates memory: what changes were made, what happened afterward, what seasonal patterns look like. Most teams see the planning layer sharpen meaningfully after four to six weeks of logged history.
What happens if the AI agent makes a bad recommendation?
The manifest-and-approval model exists for this reason. The agent does not execute changes autonomously. It generates a list of proposed changes with its reasoning for each one. You review each line item and approve or reject it. If a recommendation does not make sense given context the agent lacks, you reject it and that decision gets logged in the agent’s memory so it learns from the correction.