What does the public really think about AI?
A major new survey from the Ada Lovelace Institute and the Alan Turing Institute provides valuable insights into how the UK public feels about artificial intelligence, with findings that are particularly relevant for translators, interpreters and the clients who use their services.
The public clearly sees value in AI tools, but they also understand the need for human judgement, transparency and accountability. Speed and efficiency are seen as AI's main benefits, yet these benefits come with serious trade-offs: two-thirds of respondents reported experiencing AI-generated harms such as false information, financial fraud and deepfakes. This creates an opportunity for language professionals to position themselves not just as linguists, but as trusted advisors who can help clients understand when AI is appropriate, what its limitations are, and how to ensure quality and accuracy. Clients increasingly need guidance on managing the trade-offs, mitigating the risks, and building workflows that combine the efficiency of technology with the judgement and accountability that only human professionals can provide. This is expertise that our profession is well placed to offer.
Context matters
The nationally representative survey of 3,513 UK residents, conducted between October and November 2024, found that while 40% of the public have now used large language models (LLMs) such as ChatGPT, concerns about AI are rising, and there is strong public demand for regulation and human oversight.
The survey found that 61% of the UK public have heard of LLMs, demonstrating rapid growth in awareness of a technology that only entered mainstream discussion in late 2022. However, public openness to these tools varies significantly depending on context.
While 67% of people have used, or are open to using, LLMs for searching for answers and recommendations, this drops to 53% when it comes to using them for job applications. This suggests the public recognises that the stakes vary from task to task, and that some uses of AI warrant greater caution than others.
Rising concerns
One of the most significant findings is that concerns about AI have increased since the previous survey in 2022/23, while perceptions of benefit have remained stable. In the earlier survey, benefits outweighed concerns for five of the six technologies examined. In the current survey, benefits outweigh concerns for only three.
When asked about their concerns, the public most commonly identified overreliance on technology at the expense of professional judgement, the risk of errors, and a lack of transparency in decision-making. Even for applications where people see clear benefits, they still hold concerns. For example, 64% of respondents worried about the loss of professional judgement due to overreliance on technology in cancer detection, despite this being one of the most positively viewed applications of AI.
The researchers note that people hold nuanced views, simultaneously seeing both benefits and concerns, and that their attitudes vary depending on the specific context in which AI is used.
AI-generated harms
The survey revealed that the public has significant first-hand experience of AI-related harms. Two-thirds of respondents (67%) reported encountering some form of harm at least a few times, while over a third (39%) said they had encountered harm many times.
The most common harms people reported experiencing were false information (61%), financial fraud (58%) and deepfakes (58%). For anyone advising clients on AI use, these findings underscore the importance of verification and the ongoing need for human expertise to identify misinformation and errors.
Expectations for regulation
The survey found growing public support for AI regulation. 72% of respondents said that laws and regulation would increase their comfort with AI technologies, up from 62% in the previous survey. This rise in demand comes at a time when the UK does not have comprehensive AI legislation in place.
Beyond general regulation, the public expressed specific expectations about accountability and transparency. 65% said that procedures for appealing decisions made by AI would make them feel more comfortable, and 61% wanted access to information about how AI systems make decisions about them.
The public also expressed clear views about who should be responsible for AI safety. 58% believe both an independent regulator and AI companies should share responsibility, with the majority (over 75%) feeling it is very important for government or independent regulators to have safety powers, rather than leaving this to private companies alone. Interestingly, younger people (aged 18-44) were more inclined to favour company responsibility, while those over 55 preferred regulators, reflecting different levels of trust across age groups.
Demographic differences
The survey also revealed that attitudes to AI vary across demographic groups. Black and Asian respondents expressed higher levels of concern about facial recognition in policing (57% and 52% respectively, compared with 39% of the general population). People on lower incomes consistently rated AI technologies as less beneficial than people on higher incomes did, even when other demographic factors were held constant.
These disparities suggest that the impacts and risks of AI are not experienced equally across society, and that some communities have well-founded reasons for greater caution.
Conclusion
This research offers valuable context for translators and interpreters advising their clients on the use of AI tools. The message from the public is clear: they see AI's potential, but they also understand the need for human oversight to ensure accuracy and accountability.
The full survey findings are available at attitudestoai.uk.
The research was summarised using Claude.ai.