Couriers allege facial ID software used by Uber is ‘indirectly racist’



Drivers and couriers working with the Uber app have claimed they have been denied access to work because the company’s facial identification software does not consistently recognise their faces, a failure they say amounts to indirect racism.

According to a report in Wired, couriers have been permanently dismissed, threatened with dismissal, or have had their accounts frozen because selfies they took failed the company’s ID check.

The company’s photo comparison tool uses facial identification software to compare a selfie taken when a contractor opens the app with photos held on its database, to prove that the person logging on is the registered account holder.

Uber Eats couriers and Uber drivers in the UK have been affected by the software issue.

In some cases, workers with a record of thousands of deliveries and a 100% satisfaction rate said they were removed from the platform in a process they allege was automated, with no right to appeal. Wired stated that many had asked for their jobs back but received a message saying their termination was “permanent and final” and that Uber hoped they would “understand the reason for the decision”.

One courier was threatened with dismissal after submitting a selfie taken once he had shaved off his beard. He was asked to supply details of the person who had replaced him – a question that presumes the contractor has illegally subcontracted their shifts to someone else. He was only able to get back on the app after taking his story to a journalist, who called Uber’s press team.

Concerns that some people on the platform had no legal right to work, or were working without background checks, led Uber to add the identification step, partly under pressure from Transport for London (TfL), which had withdrawn Uber’s licence in the capital over fears about safety.

The identification software used by Uber, according to Wired, has a history of failing to identify people with darker skin tones. In 2018, the publication said, a similar version of the software used by Uber was found to have a failure rate of 20.8% for darker-skinned female faces and 6% for darker-skinned male faces, while for white men the figure was 0%.

A Harvard report last year found that black people in the US were effectively discriminated against by the use of facial recognition software, and an MIT Technology Review article in late 2019 cited a report showing that more than 200 face recognition algorithms – a majority in the industry – performed worse on non-white faces.

Uber emphasised that it used the verification check to protect against “potential fraud”. A spokesperson said any decision to remove partners from the platform “always involves a manual human review” and “anyone who is removed can contact us to appeal the decision”. The spokesperson did not comment on whether Uber had ever audited the accuracy of its verification system.

Alex Marshall, president of the Independent Workers’ Union of Great Britain (IWGB), said the facial identification system was “definitely indirect racism”, and that drivers were being forced into poverty and homelessness after being prevented from using the app.

The IWGB said it was fighting to reverse dismissals on a case-by-case basis and pushing for an Early Day Motion to be tabled in parliament.

Last year, Uber drivers in the UK and Portugal launched legal proceedings in Amsterdam against Uber over sackings by algorithm without the right to appeal. And in February the UK Supreme Court ruled that Uber drivers are workers, entitled to benefits including the minimum wage, holiday pay and pension entitlement.

