A lesson on the need for stricter regulation of artificial intelligence

Disturbing images emerged this week of a chess-playing robot breaking the finger of a seven-year-old boy during a tournament in Russia.

Public comments on this event highlight some concern in the community about the increasing use of robots in our society. Some people joked on social media that the robot was a “sore loser” and had a “bad temper.”

Of course, robots can’t express real human characteristics like anger (at least, not yet). But these comments reflect a growing tendency in the community to “humanize” robots. Others suggested this was the beginning of a robotics revolution, evoking the images many have of robots from popular films like RoboCop and The Terminator.

While these comments may have been made in jest and some images of robots in popular culture are exaggerated, they highlight the uncertainty about what our future with robots will look like. We should ask ourselves: are we prepared to deal with the moral and legal complexities posed by the interaction between humans and robots?

Human-robot interaction

Many of us have basic forms of artificial intelligence in our homes. For example, robotic vacuum cleaners are very popular household items in Australia and help us with tasks we would rather not do ourselves.

But as we increase our interaction with robots, we must consider the dangers and unknown elements in the development of this technology.

Examining the Russian chess incident, we might wonder why the robot acted the way it did. The answer is that robots are typically designed to operate in conditions of certainty; they do not handle unexpected events well.

In the case of the boy with the broken finger, Russian chess officials claimed the incident occurred because the child “violated” safety rules by taking his turn too quickly. One explanation offered was that when the boy moved so quickly, the robot mistakenly interpreted his finger as a chess piece.

Whatever the technical reason for the robot’s action, it shows that there are particular dangers in allowing robots to interact directly with humans. Human communication is complex and requires attention to voice and body language. Robots are not yet sophisticated enough to process those signals and act appropriately.





What does the law say about robots?

Despite the dangers of human-robot interaction demonstrated by the chess incident, these complexities have yet to be adequately considered in Australian law and policy.

A fundamental legal question is who is responsible for the actions of a robot. Australian consumer law sets strict requirements for the safety of goods sold in Australia. These include provisions for safety standards, safety warning notices, and manufacturer liability for product defects. Under these laws, the manufacturer of the robot in the chess incident would ordinarily be liable for the harm caused to the child.

However, there are no provisions in our product laws that deal specifically with robots. This is problematic because the Australian Consumer Law provides a defence to liability that robot manufacturers could use to evade legal responsibility. The defence applies if

the state of scientific or technical knowledge at the time the manufacturer supplied the goods did not allow such a safety defect to be discovered.

In a nutshell, a robot manufacturer could argue that it was not, and could not have been, aware of the safety defect. It could also argue that the consumer used the product in an unexpected way. Arguably, then, Australia needs more specific laws that deal directly with robots and other emerging technologies.

Law reform bodies have done some work to guide our legislators in this area. For example, the Australian Human Rights Commission issued a landmark Human Rights and Technology Report in 2021. The report recommended that the Australian government establish an AI safety commissioner focused on promoting safety and protecting human rights in the development and use of AI in Australia. The government has not yet implemented this recommendation, but it would provide one way of holding robot manufacturers and suppliers accountable.

Implications for the future

This week’s chess robot incident demonstrates the need for greater legal regulation of artificial intelligence and robotics in Australia. This is particularly so as robots are increasingly used in high-risk settings such as aged care and supporting people with disability. Sex robots are also available in Australia, and their very human appearance raises ethical and legal concerns about the unforeseen consequences of their use.





Robots clearly offer some benefits to society: they can increase efficiency, fill staffing shortages, and perform dangerous work on our behalf.

But this issue is complex and requires a nuanced response. While a robot breaking a child’s finger might be seen as a one-off, it should not be ignored. This event should prompt our regulators to implement more sophisticated laws dealing directly with robots and AI.
