Cambridge Festival 2023 – issues of trust, truth and safety in tech

Issues of safety, trust and truth tackled during tech-related events at the Cambridge Festival

Item tracking devices and abusive stalking, the future of online safety, hate speech and the role of AI, automated fact-checking, and the development of AI empathy are part of a series of events relating to technology at this year’s Cambridge Festival.

One of the largest festivals of its kind in the country, the annual Cambridge Festival, which features over 360 mostly free events, runs from Friday 17 March until Sunday 2 April.

Several talks and debates delve into the threats and opportunities posed by modern technology. Many of the events take place during the open day at the Department of Computer Science and Technology at the University of Cambridge on 18 March, during which researchers discuss and demonstrate some of the many pioneering projects they are currently working on.

In ‘HOW DO YOU KNOW WHERE I AM?’ HOW ITEM-TRACKING DEVICES ARE USED FOR STALKING (18 March), Kieron Ivy Turk describes how abusive people are using tracking devices to stalk their partners. They also examine whether the anti-stalking features of these devices are effective. In a recent study, they found that very few participants used these features, as most were unaware that they existed; and even when participants did use them, the features proved largely ineffective.

Kieron Ivy Turk is a PhD student whose research focuses on tackling the use of technology for domestic abuse. By analysing how abusers misuse everyday systems, they aim to identify ways to mitigate various technology-enabled abuse threats, reducing the harm done to victims and survivors of abuse.

Credit: Mascha Tace (Shutterstock)

Kieron Ivy Turk said: “Technology-enabled abuse takes many forms, from using smart doorbells to monitor deliveries, guests, and the victim’s location to controlling what the victim can do on their own devices. There is a wide range of problems that need to be tackled in this space: there are issues of raising awareness of how technology is used for abuse and identifying when it is happening; there is the problem of impeding or preventing the various misuses of technology; and there are many areas in which abuse victims can be better supported as they attempt to leave an abusive relationship.”

The issue of online criminality is the focus of another talk, WOULD YOU TRUST A CYBERCRIMINAL? (18 March). Cybercrime is facilitated by anonymous online environments – yet the degree of specialisation required often means there is a need to trade and collaborate with others. This poses a problem: why trust those who are inherently untrustworthy? Dr Alice Hutchings, Director of the Cambridge Cybercrime Centre, explores issues relating to trust and anonymity as they relate to online marketplaces and forums.

Credit: Studio_G (Shutterstock)

In POLARISATION, HATE SPEECH AND THE ROLE OF AI (17 March), Dr Stefanie Ullmann explores how AI contributes to the spreading of hate speech and radical ideas online, but also how it can be used to interrupt this process and prevent further polarisation. While, on the one hand, we are confronted with feed algorithms, recommender systems, and echo chambers, we can, on the other hand, make use of techniques such as redirect search, digital nudging, quarantining and automated counter-speech to disrupt the development of radicalisation and polarisation online. Dr Ullmann discusses the development of a new automated counterspeech system that outperforms existing approaches: compared to common dialogue systems such as Meta’s BlenderBot 3, the new counterspeech generator produces much more detailed and diverse responses.

Dr Ullmann believes that such systems hold great potential to fight online hate speech without censorship. She commented: “It would be ideal if social media companies were to integrate such generators into their platforms and thereby enable any user to become a counterspeaker.”

Hate speech and propaganda, fake news and misinformation are again tackled in AI TRUTH TELLERS: FACT OR FICTION? (18 March). To combat these issues, we need reliable ways to check the truth of what we are being told. Professor Andreas Vlachos and a team of researchers demonstrate state-of-the-art software they are currently developing to automatically verify claims made in text.

Credit: Andrey Suslov (Shutterstock)

The future of online safety is also discussed in BEYOND ONLINE SAFETY: AI, WEB3 AND THE METAVERSE (20 March). The Online Safety Bill gives social media companies a duty of care towards their users. But even as the Bill is making its way through parliament, the technology landscape continues to evolve. With advances in AI, increasing adoption of web3 technologies like blockchain, and big tech investing billions in building the all-encompassing virtual world known as the metaverse, how should policymakers respond? The panel considers emerging forms of technology, the unintended online harms they might produce, and what can be done to mitigate them. With Ramsay Brown, CEO, AI Responsibility Lab; Sam Gilbert, Affiliated Researcher at the Bennett Institute for Public Policy; and Alison Kilburn, Director of Analysis, Department for Digital, Culture, Media and Sport. The event chair is Dr Julian Huppert, Director of the Intellectual Forum at Jesus College, Cambridge.

Credit: MR Neon (Shutterstock)
Credit: blossomstar (Shutterstock)

In a related event, another panel asks whether the big tech firms, whose revenues exceed the GDP of many countries, are now acting like the colonialists of the past as they assert their power both in space and on Earth. Are governments able to restrain them? These questions and more are debated in BIG TECH: THE NEW COLONIALISTS? (29 March). Speakers: Professor Jaideep Prabhu from the Judge Business School, Sebastián Lehuedé from the University of Cambridge and Harvard, Gates Cambridge Scholar Alina Utrata and Jennifer Cobbe, a Senior Research Associate in the Department of Computer Science and Technology.

One area of research that is causing much controversy is whether AI could, or even should, be taught empathy. In ARTIFICIAL INTELLIGENCE: CAN SYSTEMS LIKE CHAT GPT AUTOMATE EMPATHY? (31 March), Dr Marcus Tomalin considers some of the social and ethical implications of creating automated systems that convincingly imitate human-like empathetic responses despite having no actual capacity for empathy. Dr Tomalin explains how systems such as ChatGPT, Siri and Alexa work and how they are designed to seem empathetic.

There are also several events demonstrating the latest in robot-human interaction:   

  • MEET NAO, THE ROBOT HELPING ASSESS MENTAL WELLBEING IN CHILDREN – parents and children can meet researchers who are interested in developing ‘social robots’ that can successfully interact with humans and carry out tasks such as helping us assess and manage our wellbeing. They also get to meet Nao, the children’s wellbeing robot.
  • TRY A POSITIVE PSYCHOLOGY SESSION… WITH A ROBOT! – this event offers a brief one-to-one taster session with the robot wellbeing coach researchers are developing. The researchers talk about their work looking at how a robot can engage a person in a wellbeing practice and how it can express empathy, provide feedback, and instruct and demonstrate in a personalised way.
  • TRY A ‘ROBODIARY’ SESSION WITH PEPPER THE ROBOT – researchers are working to create emotionally intelligent robots that can help make us more resilient to life’s challenges. This event offers a 15-minute, one-to-one taster session with Pepper, the robodiary, where visitors can sample what it will be like to work through a ‘Best Possible Self’ journalling exercise.
  • HOW A SWARM OF ROBOTS INTERACTS WITH HUMANS: A DEMONSTRATION – researchers are looking for new ways to induce AI agents – such as robots, machines and driverless cars – to achieve common goals while working in shared spaces like warehouses and roads. In this demonstration, they show what happens when a group of robots, working in formation, must interact with a human.
  • A VIRTUAL FOOTBALL GAME WITH AI FOOTBALLERS – in a related event to the above, the researchers demonstrate a virtual football game involving a team of AI footballers. Using their in-house, multi-agent simulator, they show how the AI footballers learn to play with each other and explain how the simulator is helping in AI research.

As part of its open day, the Department of Computer Science and Technology is also offering visitors the chance to TOUR the first computer science department in the UK. Highlights of the tour include everything from relics of EDSAC to the many areas of computer science and technology where we have made, and are making, major contributions today. These include the development of programming languages and operating systems; computer hardware, software and architecture (the interface between the two); pioneering new cybersecurity technology; AI agents – and emotionally intelligent robots.

And finally, there are two events on 18 March relating to Sonic Pi – a new kind of instrument for a new generation of musicians. In HOW TO CODE LIKE A DJ WITH SONIC PI: INTRODUCTION & LIVE DEMO, Sam Aaron, the creator of Sonic Pi, demos how to use it. This is followed by HOW TO CODE LIKE A DJ WITH SONIC PI: HANDS-ON WORKSHOP, during which participants get to code fresh beats, driving bass lines and shimmering synth riffs.
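For a flavour of what live-coding music in Sonic Pi looks like, here is a minimal illustrative sketch (not taken from the workshop itself – the sample, synth and note choices are our own) of the kind of looping beat and bass line participants might build. Sonic Pi code is written in a Ruby-based language and runs inside the Sonic Pi app:

```ruby
# A looping drum beat: plays a kick drum sample twice per second
live_loop :drums do
  sample :bd_haus
  sleep 0.5
end

# A driving acid-style bass line with a randomised filter cutoff
live_loop :bass do
  use_synth :tb303
  play :e1, release: 0.3, cutoff: rrand(70, 120)
  sleep 0.25
end
```

Because each `live_loop` runs independently and can be edited while the music plays, performers can change the code on the fly – which is what makes Sonic Pi feel more like DJing than conventional programming.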

To view the full programme and book events, please visit the Festival website:
