Identifying risks in construction with AI

Much of the talk around AI in construction has focused on anticipated productivity gains, but arguably the greatest transformation could lie in how risk is identified, decisions are justified and responsibility is allocated, writes Mark Macaulay of Dentons

Artificial intelligence (AI) is already reshaping how construction projects are designed, procured, delivered and operated.

While much attention has been paid to AI’s anticipated productivity gains and innovation, an arguably more significant transformation lies in how risk is identified, decisions are justified and responsibility is allocated.

For contractors, developers and their advisers, AI is not merely a new toolset but the driver of a gradual, fundamental reconfiguration of established legal, contractual and professional assumptions.

Contractors sit at the operational centre of construction delivery, and it is here that the impact of AI is often felt most immediately.

Operational change

AI-driven planning, scheduling and analytics tools are increasingly capable of analysing historic project data alongside live inputs to predict delays, resource conflicts and cost pressures well before they become apparent on site.

These systems draw on data sets derived from multiple completed projects, enabling pattern recognition at a scale no individual team could replicate.

This represents a shift from reactive problem-solving to predictive control. Labour deployment, plant utilisation and material sequencing are no longer optimised solely through experience and intuition, but through algorithms that continuously learn and adjust.

Large infrastructure projects already combine digital programme management with real-time site data to anticipate disruption. This approach presents both opportunities and challenges: in data-rich delivery environments where information is available earlier, the expectation to act on it intensifies.

Risk implications

Predictive possibilities raise a fundamental legal question: where a contractor has access to predictive insight but fails to act on it, does that omission constitute a breach of duty?

As AI tools become widely adopted, the benchmark for what constitutes “reasonable skill and care” is likely to evolve.

Courts and adjudicators already place weight on contemporaneous records and available knowledge, so where data-driven foresight exists, ignorance may no longer provide a credible defence.

Contractors may increasingly be judged not only on outcomes but on whether foreseeable risks were identified and addressed in a timely manner.

Developers, meanwhile, are adopting AI earlier in the project lifecycle, particularly at feasibility and design stages.

AI enables rapid scenario testing across cost, programme, sustainability and operational performance. Design options that once required weeks of modelling can now be assessed in hours.

This allows developers to make earlier and more informed decisions about viability, risk and return.

These tools are already being used to support site selection, density modelling and carbon optimisation, particularly in large residential and mixed-use schemes.

The result is greater confidence at the point of investment, but also a more detailed evidential trail.

Governance and reliance

As AI-generated analysis feeds into investment decisions, boards and funders will increasingly ask: what data was used, who validated the outputs and to what extent was human judgement applied?

Developers will need robust governance frameworks to show that AI informed decisions, rather than replaced accountability.

AI-assisted design tools are becoming embedded in Building Information Modelling (BIM) environments, offering clash detection, regulatory checks and constructability reviews in near real time.

Where AI tools can identify non-compliance or design risk at an early stage, failure to deploy them may become increasingly difficult to justify. At the same time, excessive reliance on automated outputs without appropriate professional verification creates a different exposure.

This mirrors earlier shifts in expectations around BIM adoption, where the absence of digital coordination became harder to defend as industry practice evolved.

Consultants will need to strike a careful balance between efficiency and oversight, ensuring professional judgement remains visible, documented and defensible.

Redrafting risk allocation

AI does not sit neatly within traditional construction contracts, but its growing use is exposing areas of contractual ambiguity.

Key emerging issues include liability for AI-informed decisions. If a programme delay or cost overrun arises from flawed AI forecasting, responsibility may lie with those who relied on, validated or ignored that forecast.

There are also questions over data ownership and quality. AI outputs are only as reliable as the underlying data, and contracts increasingly need to address who owns project data, who warrants its accuracy and how it may be (re)used.

Disclosure obligations also present issues, as predictive insight may trigger earlier duties to warn, particularly under collaborative contract forms such as NEC, where early warning mechanisms encourage proactive risk management.

It is likely contracts will move away from treating AI as background software and instead address its use expressly, particularly on complex or data-intensive projects.

From a health and safety perspective, AI-enabled computer vision and sensor technologies are already being deployed to monitor sites for unsafe practices, missing personal protective equipment (PPE) and hazardous conditions.

These systems can identify patterns of behaviour that correlate with higher accident risk.

Legal implications

As these AI technologies become more common, regulatory expectations may shift. Enforcement bodies may increasingly ask not only what happened after an incident but also what the system knew beforehand.

The evidential value of AI-generated safety data will grow accordingly. Used well, it may demonstrate proactive compliance. Ignored, it may expose organisations to enhanced scrutiny and liability.

AI’s capacity to optimise design for embodied carbon, operational energy use and whole-life performance not only helps developers and asset owners substantiate sustainability claims but also reduces tolerance for vague or untested commitments.

Construction is moving towards a world in which problems are predicted rather than discovered. In that world, the central question is no longer whether a risk could have been foreseen but why it was not acted upon.

Legal advisers will increasingly be required to translate technical capability into contractual clarity, ensuring innovation does not outpace control and that new tools are integrated into existing legal frameworks in a defensible way.
