
RICS has launched a new, landmark global standard on the responsible use of AI. Co-author and RICS senior standards assurance specialist Darius Pullinger discusses how it aims to tackle complacency and put humans at the heart of decision making

AI has been reshaping the built environment industry over the past few years, and the profession is expected to adapt to new and different roles.

More recently, however, a convergence of technical, social and professional conditions has had a particularly sharp impact on a number of sectors, such as building surveying and construction.

That industry context called for a collective response to put the profession on a secure footing as this technology continues to advance. That response has now come in the form of Responsible Use of AI – a new, global professional standard published by RICS.

Understanding responsible use of AI

Several underlying themes have shaped the standard's provisions and are, consequently, key to understanding them.

This article briefly discusses these principal themes: removing complacency, human-in-the-loop, and strengthening public trust and confidence.

Removing complacency

Bold promises are made for AI technology as it penetrates further into the everyday work of the surveyor. Yet, as in many industries, there is a degree of illiteracy within the profession about this technology.

Complacency therefore presents a challenge. Given how close AI use sits to surveyor decision-making and advice, and given the real fallibility of this technology, complacency could quickly corrode the reputation of the profession. The standard seeks to address it in two principal ways: upskilling and awareness.

Upskilling is vital, simply because a tool – particularly a complex one – cannot be used well in ignorance. The standard therefore sets a baseline of knowledge that should allow practitioners to make informed decisions regarding the procurement of AI, to better scrutinise AI outputs, and to work with technicians (eg data scientists) on AI tools. These skills foster real engagement with the technology, which in turn secures the future relevance and security of the profession.

A rounded awareness of how this technology is integrated into one's business, and how it is used in the delivery of services, is also essential to removing complacency.

The standard therefore seeks to promote a deep understanding of firm practice and of why AI is being used within processes (via system governance). It requires regular review of the varied risks posed by the use of AI, including risks associated with missing or limited information about an AI system (via risk management). And it seeks to enable firms to make clearly justified decisions on which AI systems are used, and for what purpose (via due diligence requirements).

Human-in-the-loop

At present, AI "reasoning", and the expression of that reasoning, is predominantly produced via probabilistic/statistical calculation – a fundamentally different process from human reasoning.

Ensuring that an AI output has sufficient sense and meaning to be used effectively in service delivery therefore requires the scrutiny of a human professional, who can apply their knowledge, skills and (professional) scepticism to the output, and who must also make and document a decision about output reliability.

Decisions regarding reliability cannot be made without deep professional subject matter expertise, as well as some baseline knowledge of the AI system. The surveyor is therefore at the core of the standard: it is their informed use of an AI system, alongside their professional experience and scepticism, that makes an AI output relevant and effective in the delivery of surveying services.

Strengthening public trust and confidence

It is important to remember that AI technology is not an end in itself but a means of achieving better outcomes for the profession, clients and the public. Ensuring that surveyors are properly engaged with the technology and its role in the delivery of services – as outlined above – is at the heart of how the standard secures the public advantage of the profession.

Moreover, the standard requires that an appropriately qualified and named surveyor accepts responsibility for decisions about output reliability. It also includes specific provisions on transparency and explainability: the former requires members and firms to describe to clients when and for what purpose AI is to be used, before it is used; the latter requires firms, on client request, to provide certain information about the AI used.

Keeping these themes in mind when reading the standard allows its provisions to be understood as interlinked, all aimed at keeping the profession useful and securing its future as it – and other dimensions of social life – is transformed by dramatic technological advances.

The post Using AI responsibly in the surveying profession appeared first on Planning, Building & Construction Today.
