ChatGPT liked this house design so why would it fail planning permission

What begins as a harmless prompt may end in misplaced confidence, because AI-generated house plans for planning permission must survive a series of rigorous real-world tests that no chatbot can yet handle.

Date published: 11 March 2026
Last modified: 11 March 2026
12-minute read
AI-generated 3D section view of a contemporary two-storey house with a flat green roof, large glazed openings and a landscaped garden, set between traditional brick suburban homes.

I want to make a confession.

I prepared a house design using Nano Banana Pro, based on a prompt I had developed separately with Claude, then pasted the result into ChatGPT and asked a question that sounded harmless: 

Does this design work?

Then, because I am human and therefore weak around flattery, I asked the second question: 

Do you like it?

ChatGPT liked it. Of course it did. I also checked with Microsoft Copilot and Gemini. They liked it too. 

One of the responses read as follows:

“Yes, the design looks very appealing. It has a clean, contemporary feel, with simple geometry, large glazed openings and a restrained material palette that gives it a calm, elegant character. The generous glazing makes the home feel bright and open, while the spiral staircase adds a distinctive, more luxurious touch. Overall, it feels modern, warm and well-composed.”

They also sounded certain. Not “maybe”, not “it depends”, not “check with your architect”, not “subject to planning permission”. 

For a moment, I felt like the design had been validated.

If you have spent any time using AI tools at all, you will recognise the trap: the tool that was built to be helpful will also be persuasive, and the tool that is persuasive will sometimes feel like an expert even when it is not.

This article is not a campaign against using AI in architectural design. It is about being honest about why AI-generated house plans, however compelling they appear, are not the same as designs capable of securing planning permission, satisfying building regulations, or actually being built.

That matters because people are no longer asking only whether AI can produce a compelling render. They are asking whether ChatGPT can design a house for planning permission, whether Claude can prepare planning application drawings, and whether a council will accept AI-generated plans by Nano Banana Pro as part of a valid submission. Those are much harder questions. They are also the ones this article is actually about.

AI-generated elevated view of a modern suburban house with a green roof, minimalist rear façade and large windows, positioned between neighbouring brick homes with conventional roof forms.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

What AI-generated house designs for planning applications actually look like

The house AI designed is a two-storey white render box over dark engineering brick. 

Full-height glazing across both floors. A sedum green roof. A spiral staircase you can see straight through the rear elevation. It sits between two traditional red-brick semis, which tells you everything about the problem.

It is exactly the kind of home that turns up in the "dream home" corners of architecture sites and under searches like "building a house on your own land": an infill plot, a tight budget that pretends to be flexible, and a brief that insists on "light", "space", and "a strong connection to the garden".

On the first pass, the design is intoxicatingly legible. 

Clean volume. Spiral stair as centrepiece. Sliding glass walls opening the ground floor straight onto the garden. Timber slats softening the render. A flat roof with a sedum layer dropped in like a magic spell. It is the sort of output that makes you think: maybe the hard part is over.

But a building is not a poem. A building is a negotiated set of commitments: to climate, to safety, to comfort, to law, to neighbours, to budgets, to material supply chains, to the physics of heat and water, and to the fact that human bodies need to get out quickly when something goes wrong.

None of this means AI is useless at the early stage. A well-prompted tool can help you test massing options, articulate a brief, or quickly explore what a certain material combination looks like. That is genuinely valuable. The problem starts when that exploration hardens into conviction and when the concept stops being a question and starts being a decision.

AI-generated rear view of a contemporary two-storey house with a flat green roof, large glazed openings and a landscaped garden, set between traditional brick suburban homes.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

When praise starts hardening into judgement

What I did next is the part I am not proud of. 

I treated the bot’s approval as if it were a genuine design review. Slowly, I began to give fixed meaning to choices that should have remained provisional and open to challenge.

That flat-roofed form? “A clean contemporary composition.”

That full-height glazing? “A strong connection between inside and outside.”

That roof terrace with glass guarding? “A premium lifestyle feature.”

That dark-framed minimal palette? “Confident and elegant.”

That first-floor window, sitting three metres from the boundary? I didn't ask.

And that is exactly how it happens. Once the language sounds polished, it becomes dangerously easy to forget that none of it has yet been tested against context, policy, privacy, cost or buildability. What you have received is not a design review. It is a reaction. And the difference between the two is everything.

AI-generated suburban infill house design showing a two-storey contemporary form, green roof and expansive glazing, viewed in contrast to neighbouring pitched-roof properties.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

What the chatbot doesn't tell you

The danger is that early-stage design decisions set the direction for everything that follows, and AI encourages you to make those decisions too quickly, on too little information. 

In a scheme like this, the overall form affects not only appearance but thermal performance and build cost.

The amount of glazing influences overheating, privacy and energy loss. The relationship between upper-floor windows, terraces and boundaries shapes whether neighbours may object. The roof form affects how naturally the house sits within the street. And the structural ambition of wide openings and crisp projecting elements may quickly turn a neat concept into an expensive exercise.

The chatbot does not slow you down to consider any of this. 

It does not tell you to stand outside the site and notice that the street is held together by pitched roofs, brick facades and a more settled domestic rhythm. It does not ask whether a flat-roofed, glass-heavy box would appear elegant in isolation but awkward in context, or point out that what feels refined in an image may read as overly assertive on a tight suburban plot.

Nor does it warn you where the real pressure points are likely to emerge. 

The issue may not be the part of the design you admire most, but the first-floor overlooking, the bulk against the boundary, the terrace guarding, the depth of the rear projection, or the way the house sits within the roofscape of the street. In other words, the design may begin to unravel not at the level of aesthetics, but at the level of acceptability.

The chatbot also does not tell you that the open, flowing interior suggested by the image will later be shaped by structure, fire safety, ventilation, insulation and the practical demands of building regulations. What looked effortless in the render may become compromised, contorted or simply unaffordable once it is forced to behave like a real building.

That friction - the pause, the doubt, the site visit, the policy check, the second opinion - matters more than people think.

But that is also where architecture becomes real. It is where a seductive concept is tested against the discipline required to turn it into a house that belongs to its setting, works for the people inside it and stands a realistic chance of securing planning permission. 

AI-generated exploded axonometric view of a contemporary two-storey house, showing the green roof, upper floor, internal layout and glazed ground floor.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

Can an AI-designed house secure planning permission?

The relentlessly positive tone of chatbots feels comforting for about three seconds. Then it begins to feel slightly alarming. Because AI approval is not the same as planning permission.

In England, there is no single test for whether a proposal works. Any design must survive several at once: national planning policy, the local authority's own policies, building regulations, and the design expectations that apply to the specific site. What sounds persuasive in the abstract may fall apart as soon as it meets the real demands of compliance, process and place.

Local planning decisions are rarely made on the terms AI finds easy to satisfy.

The images show exactly the kind of proposal a chatbot admires: crisp, minimal and magazine-ready. But what reads as confident design in isolation is always judged against what surrounds it. 

Here, a flat-roofed box with extensive glazing and a stripped-back palette sits awkwardly against a suburban setting defined by pitched roofs, brick houses and a more settled domestic rhythm. The upper-floor glazing and glass guarding raise concerns about overlooking and privacy. The box-like massing reads as too assertive for a tight suburban plot. These are not aesthetic objections. They are material planning considerations.

That does not mean contemporary design is impossible. In fact, we have secured planning permission for many contemporary schemes, from bespoke single dwelling houses on garden plots and in the Green Belt to infill developments and residential blocks in London and across the UK. 

But those planning permissions were not won by producing something that looked modern. They were secured by designing buildings that responded intelligently to their setting, articulated a clear design rationale, and aligned architectural ambition with sound planning judgement. 

That meant proper planning drawings prepared to a very high standard that officers could assess, Design & Access Statements that explained the thinking behind the scheme, and in most cases direct negotiation with officers, because planning permission is rarely handed over. It is worked for.

That is the test AI consistently fails, and it runs deeper than aesthetics. A chatbot can scan planning policy, summarise a design rationale, and produce something that reads like a considered response to context. But it cannot do any of that to the depth a planning officer will probe, or with the judgement a dedicated professional team brings to a specific site, a specific authority, and a specific set of recent decisions. 

The result is always the same. AI-generated designs do not fail planning permission because they look wrong. They fail because the thinking behind them was never deep enough to make them look right to the people who decide.

And that is the core of it. Visual coherence and real-world acceptability are not the same thing, and no amount of confident prose closes that gap.

AI-generated interior view of a contemporary open-plan house, showing large sliding doors to the garden, a sculptural spiral staircase, living area, dining space and kitchen.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

Can AI-generated house plans meet building regulations requirements?

Beyond the planning system, a house in England must also satisfy Approved Documents and a live regulatory timetable that governs how it is actually built. These are the statutory guides that form the backbone of building regulations in England, covering everything from fire escape and structural stability to energy performance, ventilation and accessibility.

So when AI says a design works, the obvious question is: under which rules, for which site, on what assumptions, and evidenced how? If the tool cannot answer those questions, the approval means very little.

AI chatbots liked the generous glazing, the open-plan feel and the sculptural stair. That is exactly the problem. The design is being rewarded for the very things most likely to cause trouble once the image meets the real-world tests of overheating, escape, access, energy performance and planning policy.

Take Part O first.

The large areas of glazing may look elegant, but Part O is not interested in elegance. It asks whether glazing area, orientation, cross-ventilation and shading work together well enough to prevent overheating. What reads as light-filled in a render may become a compliance problem in summer.

Part B is no simpler.

The open-plan layout and feature stair help sell the image, but fire safety is about escape, separation and performance, not atmosphere. A dramatic stair is not automatically a compliant stair. A beautiful bedroom window is not automatically an escape window. From an AI-generated image alone, you simply do not know whether the openings are large enough, low enough, or arranged in a way that supports a safe escape strategy.

Parts M and L complete the picture.

Accessibility, thresholds, circulation widths and energy performance do not resolve themselves because a plan looks clean. These requirements interact with each other and with the decisions made under Parts O and B. That interaction is what a chatbot has no mechanism to test.

That testing has a formal home. A full plans application exists so building control can examine proper building regulations drawings before work begins. It is the point at which the features the chatbot praised are measured against actual requirements rather than aesthetic logic. Without it, those features are not strengths. They are unresolved risks.

And that is before construction even begins. Once work is under way, a registered building inspector (commonly referred to as a building control officer) will inspect the site at key stages, including the foundations, damp-proof course, structural works, insulation and drainage. Each inspection is not just a checklist exercise. It is a real-world exchange between the inspector and the people on site, grounded in presence, professional judgement and accountability.

Presumably ChatGPT will pop by at slab level to cast an eye over the foundations. Perhaps Gemini will check the roof structure on a wet Tuesday in February. The chatbot that approved your design will not be there. It was never going to be.

So where does that leave you?

Compliance is not a single gate at the end of design. It is a design material in its own right. Treat it as an afterthought and it will reshape your building anyway: late, expensively, and with far less grace than if it had been part of the thinking from the start.

AI-generated side view of a modern flat-roofed house with a planted roof, wide glazing and garden terrace in a suburban context of pitched-roof brick houses.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

How can AI help to save money on architect fees?

Many homeowners and self-builders are asking exactly that. If AI can produce a convincing house design in minutes, the logic goes, why pay architect fees at all?

It is worth understanding what those fees are actually for, because most people who ask that question are comparing the wrong things.

Architect fees are not primarily paying for drawings. They are paying for someone who has done this before on a similar site, with a similar authority, and who has the experience of finding the right balance between the constraints of the project and what the client actually wants to achieve - and who knows, from that experience, where an application is likely to run into trouble before it does.

And here is what that means in practice. That knowledge does not appear in an AI output. It is accumulated through years of submissions, refusals, and negotiations, and through the kind of site-specific judgement that only comes from standing on the plot and reading what surrounds it.

There is also the question of accountability. 

An architect carries professional indemnity insurance and is bound by a code of conduct. If the advice turns out to be wrong, or the drawings inadequate, there is a professional on record who bears responsibility. AI carries none of that. Whatever goes wrong, the liability stays with the person who submitted the application.

The real cost of skipping professional input tends to arrive later: in a planning refusal that requires a full redesign, in building control queries that expose compliance gaps, or in a contractor who cannot price from drawings that were never buildable. 

The maths is simple. Architect fees are not the expensive part of a project. The expensive part is what happens when you skip them.

AI-generated top-down aerial view of a contemporary house in a UK suburban setting, with a biodiverse flat roof, showing its footprint, garden layout and relationship to neighbouring brick homes and surrounding residential plots.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

Why AI approval feels so convincing

Chatbots like ChatGPT, Claude, Gemini and Microsoft Copilot, alongside image-generation tools such as Nano Banana Pro, are becoming part of the design workflow far faster than the profession has worked out how to use them well.

The problem starts when that workflow shifts from generating design material to generating judgement, because chatbots, in particular, are built to produce answers that read as if they belong. They complete patterns and default to fluent certainty. They can be useful, but they are also capable of producing incorrect or misleading output, sometimes with high confidence.

Even OpenAI has been blunt about this: “hallucinations”, plausible but false statements, remain a persistent problem for language models, in part because common training and evaluation incentives reward guessing over admitting uncertainty.

There is a second, quieter issue: consistency. 

Even if a model is “mostly right”, you don’t get to choose which day it is mostly right. In safety-related contexts, researchers have pointed to volatility in analytical quality and the danger of assuming a model is stable simply because the interface looks stable. 

Architecture depends on repeatability, and AI cannot offer it

Architecture workflows, meanwhile, love repeatability. A template detail is valuable precisely because it behaves the same way tomorrow as it did yesterday. 

That is why we talk about “standard details”, “approved products”, and “robust specifications”. The built environment is full of learned routines because routines reduce risk. A chatbot’s output is not a routine; it is a one-off improvisation. 

This is also why cross-sector risk management frameworks have been leaning into governance, provenance, testing, and incident disclosure rather than treating generative AI as a plug-and-play productivity hack. 

The National Institute of Standards and Technology frames its Generative AI Profile as a companion resource to the AI Risk Management Framework, intended to help organisations incorporate trustworthiness considerations and manage risks that are novel to or exacerbated by generative AI. 

That same profile explicitly notes that repeated use of the same model may produce “algorithmic monocultures”, meaning correlated failures arising from many actors relying on the same system. In architecture, you can imagine the built equivalent: the same default plan logic, the same façade tropes, and the same blind spots replicated at scale.

The European Union comes at it from the regulatory angle. The European Commission describes the AI Act as a risk-based framework designed to ensure trust, with transparency expectations and, for certain categories, obligations around oversight and accountability.

You don’t need to believe that a chatbot is legally “high risk” in your studio to take the message seriously: society is moving toward the idea that AI outputs must be traceable, supervised, and contestable when they touch safety, rights, and real-world consequences. Buildings touch all three. 

The real danger is false reassurance

In architecture, that design-for-helpfulness collides with a professional culture that already uses language to smooth uncertainty. 

We say “indicative”. We say “subject to coordination”. We say “to be confirmed”. We say “provisional”. A convincing paragraph can pass, temporarily, as competence.

The risk is not simply that an AI chatbot will be wrong in a minor way. The risk is that it creates an illusion of review: a sense that someone independent has “checked” the logic of the plan. 

This is a classic pathway to over-reliance, and it is so common that the EU AI Act explicitly treats “automation bias”, the tendency to automatically rely or over-rely on AI outputs, as something that human oversight measures should address for high-risk systems.

Yes, architecture is not a medical device. But residential design is still a safety-critical activity in a banal, everyday way: it deals in escape routes, fall hazards, fire spread, structural stability, ventilation, overheating, and long-term durability. The consequences of a bad “yes” are not abstract. They are physical.

Research on large language models in safety-critical contexts has flagged a related issue: performance can be volatile, analytical quality inconsistent, and regulatory compliance assessment absent from existing benchmark landscapes, even while the systems appear impressive at surface-level language tasks.

The second reason AI “approval” feels convincing is that it is often aligned with what we already want. When you ask a chatbot whether the design is “good”, you are not asking a neutral question. You are asking for reassurance. And the technology is tuned, commercially and psychologically, to provide it. 

We should name that dynamic honestly: it is not just automation. It is authority laundering. 

AI takes a set of familiar stylistic cues, the white render, the black aluminium frames, the timber slats, the minimal roofline, and reflects them back as "good design". The trick is that, in planning and technical design, "good" is a moving target set by local context and hard constraints. 

A design might be coherent in the abstract while remaining entirely context-blind: ignoring the grain of the street, the roofscape, the materials palette, or the way a place feels to its users. 

And this is where AI reaches its limit. It may imitate what looks like good design in the abstract, but even if you feed it a local authority’s policies, supplementary planning documents and site constraints, it still cannot match the judgement, nuance and site-specific intelligence of a chartered architect, or the strategic planning judgement of an experienced planning consultant, in shaping a development that genuinely fits its context and has the strongest prospect of securing planning permission.

If you are wondering whether this is just a concern for amateurs, residential architects and the wider profession are clearly anxious about where the boundaries sit. The Royal Institute of British Architects (RIBA) reported in 2025 that AI use among architects’ practices rose sharply, while major concerns included imitation risk and the possibility that AI enables people without sufficient professional knowledge to design buildings.

That is not a moral panic. It is a practical one. 

AI-generated image of a contemporary residential design with a planted flat roof, timber detailing and full-width ground-floor glazing, seen from an elevated rear angle.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

The creativity tax of AI-generated house design

The first casualty of “ChatGPT liked it” is often not the building. It is the designer’s mind. 

Architectural creativity is not only about producing a novel form. It is also the ability to reframe a brief, to notice the constraint that can become an opportunity, and to hold competing values in tension long enough for a third option to appear. Those are slow skills. They are trained through doing, failing, revising, and repeating with intent. 

The problem is not just what AI produces, but what it stops you exploring

Generative AI changes the tempo. It offers an immediate “good enough” option, and immediate options create a subtle pressure to stop looking.

That pressure has a name.

Decades before modern AI, design research defined "design fixation" as a measurable barrier in conceptual design: a blind adherence to an initial set of ideas that limits subsequent exploration.

Now we have early empirical evidence that AI may intensify that fixation. A 2024 experimental study on exposure to AI-generated images during a visual ideation task found that AI support led to higher fixation on an initial example, with participants producing fewer ideas, lower variety and less originality than a control group. 

In experimenting with these tools myself, I have found much the same: generative AI systems exhibit their own version of the problem, closely adhering to learned patterns and resisting the kind of novelty and diversity that makes a design worth pursuing. When the tool is stuck, it tends to keep the person beside it stuck too.

And quantity does not rescue you from that trap. A 2025 experimental study using ChatGPT-4o on a creativity task found the model generated many ideas but showed a fixation bias and struggled to evaluate originality in the way humans can. 

In architectural practice, this shows up in a familiar way: the AI suggests the house you have already seen a thousand times. Or, more precisely, it suggests the statistical centre of the houses it has absorbed. In other words, the tasteful average of “contemporary home” imagery. 

You can feel the convergence in language too. 

Ask five designers to use ChatGPT house design prompts for "a modern family home with Scandinavian warmth" and you will likely get five slight variations of the same plan. That is what pattern-matching does: it makes a strong case for the most probable answer.

This is where AI can hinder creative thinking in a particularly architectural way.

Consider how quickly a “style” becomes a shortcut. In contemporary practice, a style reference might start as a pinboard: a few precedents, a couple of materials, a massing diagram. 

With generative AI, that pinboard mutates into an engine. You don’t just collect images; you manufacture them. And because the images arrive already coherent, the mind skips the step where you ask whether coherence is the same as relevance. 

Generic intelligence struggles most where context matters most

The risk becomes sharper in places where architecture is inseparable from memory, craft and cultural specificity.

Research evaluating AI image generators against examples of Islamic architectural heritage found that the systems could produce visually appealing outputs "inspired" by the tradition but often drifted away from the real architectural structures and details they referenced. The limitation was not visual quality but contextual understanding. 

Swap "Islamic heritage" for "a Victorian terrace street in Bristol" or "a conservation area in Manchester" and the same principle applies. Good contextual design is not a generic visual language; it is a set of local negotiations.

This is one reason planning policy continues to emphasise local distinctiveness, and why the NPPF warns that design expectations must be clarified and tested through engagement, with policies that reflect local aspirations while allowing a suitable degree of variety.

Generative AI can turn "precedent" into "default". The difference matters. Precedent is interrogated: What is it doing? Why does it work there? What are we keeping, and what are we refusing? A default is accepted silently, arriving already packaged.

Local design codes, for all their reputation as standardisation, are often built around exactly that distinction. AI may collapse that distinction without any deliberate rule, simply through statistical gravity.

Successful design is not only option generation but option rejection: saying no to the plausible so you can pursue the thing that fits. That judgement is inseparable from context that no training dataset can fully absorb.

AI-generated twilight view of a modern open-plan interior, with full-height rear glazing, a sculptural staircase and connected living, dining and kitchen spaces.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

The chatbot will not sign the drawings

If AI’s biggest seductive quality is that it speaks like an expert, its biggest professional risk is that it leaves you holding the liability. 

So the real question is not whether AI belongs in the workflow, but whether the workflow still makes clear who is responsible, what has been checked, and where the evidence sits.

Why the audit trail cannot be outsourced

The Building Safety Regulator has published explicit guidance on duties and competence under the 2023 Building Regulations amendments: clients must make suitable arrangements for compliance, allocate sufficient time and resources, and appoint competent designers and contractors.

In parallel, the "golden thread" concept for higher-risk buildings formalises something that should already be second nature: keep a secure digital record, with version control, and maintain a single source of truth that can demonstrate how the building complies.

That is a direct clash with the casual "paste it into a chatbot" workflow. A golden-thread mindset is about provenance: who made the decision, on what basis, when, and where the evidence lives. Chatbots are, by default, anti-provenance. They are a conversational surface over a statistical system.

How AI creates governance problems as well as design ones

The privacy side is not optional either. Once a project enters a chat interface, UK GDPR principles on integrity and confidentiality are already engaged. Architectural data often contains personal information - names, addresses, health needs, access arrangements, and photographs of interiors. Even small details can become sensitive once a project is in the pipeline.

And when things go wrong, the accountability does not redistribute. The 2024 RIBA AI Report is explicit: AI is not an entity that can be held liable. Architects carry professional indemnity exposure for information produced as a result of AI use.

The quiet danger, then, is not a single catastrophic error. It is drift. 

AI-generated plans for planning applications enter workflows as first drafts and quietly become final ones. A planning statement drafted "just for a first pass" becomes the planning statement. A specification clause added "as a placeholder" becomes the clause. The audit trail becomes muddy.

And muddy audit trails, especially in a post-Building Safety Act world, are not a technical problem. They are a governance problem.

What disciplined use actually looks like

The Planning Inspectorate published guidance in September 2024 on the use of AI in any appeal, application or examination it handles. 

The message is clear: if you use AI to create or substantially change any part of your submission, you must declare it, specify which tools you used, identify the source of the information the AI drew on, and take personal responsibility for the factual accuracy of everything you submit. Improper use could be treated as unreasonable behaviour and may result in an award of costs against you.

That is not guidance sitting in a drawer waiting to be tested. It is already being applied in live appeals. Reviewing recent Planning Inspectorate decisions, we have come across cases where inspectors have directly questioned whether submissions were AI-generated. 

In one appeal, an inspector noted serious concerns that a Statement of Case had been produced using AI, observing that undeclared use would, in their view, amount to unreasonable behaviour. The unusual layout, phraseology, and scope of the document, combined with the absence of any identified planning professional as author, made those concerns difficult to dismiss. 

In the inspector's own words: "I have serious concerns that the SoC was produced using AI, something which undeclared, would in my view amount to unreasonable behaviour." 

To be clear: these are not our cases, and we do not think it appropriate to name the applicants involved. But they are a warning worth heeding. The Planning Inspectorate's guidance is not a future intention. It is a formal disclosure obligation already inside the planning process, and as these appeals make clear, inspectors are already prepared to act on it.

The RIBA's 2025 update confirms that anxiety about exactly this is rising across the profession alongside adoption. The real question is no longer whether to use AI, but how to use it without weakening judgement or asking it to do regulatory work it cannot reliably do.

So what does disciplined use actually look like?

Treat the chatbot like a junior assistant. It may help with first drafts, early options, summarising documents and surfacing issues to check. But it cannot carry professional responsibility. If it produces five versions of the same house, take that as a warning, not a resolution. Change the constraints, rethink the typology, and step outside the model's visual defaults.

On compliance, the same discipline applies. Do not ask a chatbot whether a design works. Ask it what you need to check, then verify every point against authoritative sources, proper building regulations analysis and specialist input. Keep every compliance-critical decision in the project record. If you cannot reconstruct the reasoning behind a key design decision without opening a chat log, your audit trail is already too weak.

The government's own approach to AI in planning reflects exactly this dividing line. 

MHCLG has awarded Google Cloud a £6.9 million contract for an augmented decision-making planning tool focused on householder applications, with the goal of speeding up routine processing rather than replacing planning judgement. That sits alongside the government's Extract tool for digitising historic planning records and Greater Cambridge’s PlanAI trials, where AI summarises consultation responses while planners retain the decisions. In every case, AI assists. It does not decide.

And if you still want the chatbot to admire the house, fine. Let it praise the roofline and the material palette. But stop there. It is not responsible for whether the house secures planning permission, complies with regulations, or protects the people inside it.

Admiration is easy. Accountability is not.

AI-generated sketch-style axonometric view of a contemporary house floor plan, showing an open-plan living area, kitchen, bathroom and spiral staircase in a conceptual layout.
This image is AI-generated and was created specifically for this article as part of an experiment testing how convincing AI-produced architectural designs can appear before real planning, design and technical scrutiny begins.

A note for clients using AI to write project briefs

For clients seeking architects and town planners for design and planning application services in the UK, there is a related point worth making. In recent months, we have seen a noticeable increase in enquiries written with the help of AI.

That is not necessarily a bad thing. Used properly, AI may help people organise their thoughts, frame a project more clearly and approach the process with greater confidence. The problem starts when it stops being a tool and starts becoming the voice of the enquiry.

In the past, an initial brief was often just a few sentences, or at most a page. Far from being a weakness, that was often a strength. It gave us room to ask the right questions, shape the brief together and uncover what the client was really trying to achieve. Those early conversations were often among the most valuable parts of the process.

Now, by contrast, we sometimes receive pages of AI-generated project narrative, and occasionally AI-generated house plans, as the very first enquiry. Much of it does not read naturally. Much of it is oddly structured. And much of it is surprisingly poor at communicating the points that actually matter: what the client wants to build, why they want to build it, what constraints they already know about, and where they need professional help. In trying to sound comprehensive, these briefs often become less clear, not more.

That can create an unintended problem. If the first message feels overly engineered, generic or detached from the real project, it may become harder for an architectural practice to engage with it properly. The enquiry may appear polished, but the substance is buried. What should have opened a productive conversation instead obscures it.

It reminds me of something one of our existing clients experienced when looking for bespoke furniture. They approached a highly specialised maker, only to receive a reply along the lines of, “This is too bespoke for us.” The issue was not that the idea lacked merit. It was that the brief had become so overworked, so over-specified and so far removed from the natural starting point of a collaborative design process that it became harder, not easier, for the right professional to respond.

The same risk applies to architectural enquiries. A brief does not need to sound polished to be useful. It needs to be clear. It needs to be honest. And it needs to tell us, in straightforward terms, what you are trying to achieve.

My advice is simple: use AI lightly. Let it help you think, but do not let it speak in your place. The best first brief is rarely the longest or the most polished. It is the one that sounds like a real person, describing a real project, in a way that gives the right professional something solid to respond to.

Ufuk Bahar, Founder and Managing Director of Urbanist Architecture
AUTHOR

Ufuk Bahar

Urbanist Architecture’s founder and managing director, Ufuk Bahar BA(Hons), MA, takes personal charge of our larger projects, focusing particularly on Green Belt developments, new-build flats and housing, and high-end full refurbishments.


