She was speaking at a Council of Europe hearing on the implications stemming from July’s controversy over Pegasus spyware.

The Pegasus revelations were no surprise to many people, Ms. Bachelet told the Council of Europe’s Committee on Legal Affairs and Human Rights, in reference to the widespread use of spyware commercialized by the NSO group, which affected thousands of people in 45 countries across four continents.

The High Commissioner’s call came as her office, OHCHR, published a report that analyses how AI affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression. The document includes an assessment of profiling, automated decision-making and other machine-learning technologies.

The situation is “dire”, said Tim Engelhardt, Human Rights Officer, Rule of Law and Democracy Section, who was speaking at the launch of the report in Geneva on Wednesday. The situation has “not improved over the years but has become worse”, he said.

Whilst welcoming “the European Union’s agreement to strengthen the rules on control” and “the growth of international voluntary commitments and accountability mechanisms”, he warned that “we don’t think we will have a solution in the coming year, but the first steps need to be taken now or many people in the world will pay a high price”.

Failure of due diligence

According to the report, States and businesses often rushed to incorporate AI applications, failing to carry out due diligence. It states that there have been numerous cases of people being treated unjustly due to AI misuse, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition software.

OHCHR Director of Thematic Engagement, Peggy Hicks, added to Mr Engelhardt’s warning, stating “it’s not about the risks in future, but the reality today. Without far-reaching shifts, the harms will multiply with scale and speed and we won’t know the extent of the problem.”

The document details how AI systems rely on large data sets, with information about individuals collected, shared, merged and analysed in multiple and often opaque ways. The data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant, it argues, adding that long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways.

“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face,” Ms. Bachelet said.

The report also stated that serious questions should be raised about the inferences, predictions and monitoring by AI tools, including seeking insights into patterns of human behaviour. It found that the biased datasets relied on by AI systems can lead to discriminatory decisions, which are acute risks for already marginalized groups. “This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks,” she added.

Biometric technologies

Biometric technologies are an increasingly go-to solution for States, international organizations and technology companies, and the report states they are an area “where more human rights guidance is urgently needed”. These technologies, which include facial recognition, are increasingly used to identify people in real-time and from a distance, potentially allowing unlimited tracking of individuals.