So why don’t we trust this kind of technology more?
One reason is that “fairness” carries a strong, collectively shared meaning, argues Professor Gina Neff of the University of Cambridge.
“Right now, in many areas where AI is touching our lives, we feel that humans understand the context much better than the machine,” she said.
“The machine makes decisions based on the set of rules it has been programmed to apply. But people are really good at including multiple values and outside considerations as well – what’s the right call might not feel like the fair call.”
Prof Neff believes that framing the debate as a question of whether humans or machines are “better” is not the right way to look at it.
“It’s the intersection between people and systems that we have to get right,” she said.
“We have to use the best of both to make the best decisions.”
Human oversight is a foundation stone of “responsible” AI – in other words, deploying the technology as appropriately and safely as possible.
That means having a human somewhere in the loop, monitoring what the machines are doing.
Not that this is working particularly smoothly in football, where VAR – the video assistant referee – has long been a source of controversy.
It was officially declared to be “significant human error”, for example, when the referee failed to rectify a wrong decision as Tottenham played Liverpool in 2024 – a crucial goal was ruled offside when it was not – and a barrage of fury followed.
The Premier League said VAR was 96.4% accurate in “key match incidents” last season, although chief football officer Tony Scholes admitted “one single error can cost clubs”. Norway is said to be on the verge of scrapping it.
Despite the failures being human ones, a perceived lack of human control plays its part in our reluctance to rely on the tech more generally, says entrepreneur Azeem Azhar, who writes the tech newsletter Exponential View.
“We don’t feel we have agency over its shape, nature and direction,” he said in an interview with the World Economic Forum.
“When technology changes very rapidly, it forces us to change our own beliefs quite quickly, because the systems we used before don’t work as well in the new world of this new technology.”
Our wariness of the technology does not apply just to sport. The first time I watched a demo of an early AI tool trained to spot early signs of cancer in scans, it was extremely good (this was a few years before today’s NHS trials) – considerably more accurate than the human radiologists.
The issue, its developers told me, was that people being told they had cancer did not want to hear that a machine had diagnosed it. They wanted the opinion of human doctors, preferably several of them, to concur before they would accept it.
Similarly, autonomous cars – with no human driver at the wheel – have clocked up millions of miles on the roads of countries like the US and China, and data shows they have statistically fewer accidents than human drivers. Yet a survey carried out by YouGov last year suggested that 37% of Brits would feel “very unsafe” inside one.
I’ve been in several, and while I did not feel unsafe, I did – once the novelty wore off – begin to feel a little bored. And perhaps that is also at the heart of the debate about the use of tech in refereeing sport.
“What [sports organisers] are trying to achieve, and what they are achieving, is perfection,” says sports journalist Bill Elliott, editor of Golf Monthly.
“You can make an argument that perfection is better than imperfection, but if life were perfect we’d all be bored to death. So it’s a step forward, and also a step into a different kind of world – a perfect world – and then we’re shocked when things go wrong.”