• Question: Have you ever experienced any problems like those of the scientists who couldn’t be seen by facial recognition?

    Asked by Hugo on 31 Dec 2019.
    • Photo: Diana Kornbrot

      Diana Kornbrot answered on 31 Dec 2019:


      Yes. My iPhoto software enables me to tag one face, say Jo, and then its AI software will search for any pics that have the ‘same’ face.

      There are two kinds of FREQUENT mistakes:
      false alarms: people who are not Jo are tagged as Jo
      misses: Jo is in pics, but not found by the AI software

      When assessing ANY software one needs to know both the hit rate and the false alarm rate.
      This goes for facial recognition, driverless cars spotting obstacles, and police software looking at climate protesters.
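      As a rough illustration, here is a minimal Python sketch (with made-up tag results, not real iPhoto output) showing how the two rates are counted from a labelled set of photos:

        # Made-up test set: was each photo tagged as Jo, and was it really Jo?
        predictions = [True, True, True, True, False, False, False, False]
        actually_jo = [True, True, True, False, True, False, False, False]

        hits = misses = false_alarms = correct_rejections = 0
        for tagged, is_jo in zip(predictions, actually_jo):
            if tagged and is_jo:
                hits += 1                # Jo found
            elif not tagged and is_jo:
                misses += 1              # Jo in pic, but not found
            elif tagged and not is_jo:
                false_alarms += 1        # not Jo, but tagged as Jo
            else:
                correct_rejections += 1  # not Jo, correctly not tagged

        hit_rate = hits / (hits + misses)                                      # 3/4 = 0.75
        false_alarm_rate = false_alarms / (false_alarms + correct_rejections)  # 1/4 = 0.25
        print(hit_rate, false_alarm_rate)

      Both numbers matter: software that tags every photo as Jo has a perfect hit rate but a terrible false alarm rate.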

      One also needs to know how many examples were used to test the software. Too few examples and predictions will be cheap, but poor. Economising on software was a major cause of the Boeing 737 MAX disaster – don’t expect any Boeing executives to be done for manslaughter.
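      A quick simulation (assuming, purely for illustration, a system with a true hit rate of 90%) shows why a small test set is misleading – the same system looks wildly different depending on how many examples it is tested on:

        import numpy as np

        rng = np.random.default_rng(0)
        true_hit_rate = 0.9  # assumed quality of the system

        for n_test in (5, 50, 500):
            # Repeat the evaluation 10,000 times and see how much the
            # measured hit rate bounces around for each test-set size.
            measured = rng.binomial(n_test, true_hit_rate, size=10_000) / n_test
            print(f"tested on {n_test:3d} examples: measured hit rate varies "
                  f"from {measured.min():.2f} to {measured.max():.2f}")

      With only 5 test examples the measured hit rate swings enormously from run to run; with 500 it settles close to the true value.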

      Sadly, lectures did not explain misses and false alarms clearly.

    • Photo: Gary Munnelly

      Gary Munnelly answered on 1 Jan 2020:


      Personally, no, not yet. But there are loads of known cases of things like this happening.

      The problem is caused by bias in data. Sometimes this bias is accidental, but it can have a profound impact on what the computer learns.

      Basically, AIs learn how to perform a task by looking at data. If there is a bias in the data, then the AI can learn that bias. For example, if we wanted to train an AI to understand speech, we might give it a bunch of recordings of people talking. But if all the recordings were male, then the computer might not learn how to understand female voices.
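      A toy sketch of that speech example (with invented pitch numbers – nothing like a real speech system) shows how the bias creeps in:

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented voice-pitch samples in Hz: adult male voices tend to be
        # lower-pitched than adult female voices.
        male_voices = rng.normal(120, 20, 1000)
        female_voices = rng.normal(210, 25, 1000)

        # "Train" on male recordings only: the model learns to accept
        # anything within 2 standard deviations of what it has seen.
        lo = male_voices.mean() - 2 * male_voices.std()
        hi = male_voices.mean() + 2 * male_voices.std()

        recognised_male = ((male_voices >= lo) & (male_voices <= hi)).mean()
        recognised_female = ((female_voices >= lo) & (female_voices <= hi)).mean()
        print(f"male voices recognised:   {recognised_male:.0%}")    # around 95%
        print(f"female voices recognised: {recognised_female:.0%}")  # only a few percent

      The model isn’t ‘wrong’ about the data it saw – it simply never saw the other group, so it treats those voices as noise.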

      This is a huge problem which can have some serious negative consequences for people who have to interact with this software.

    • Photo: Maja Popovic

      Maja Popovic answered on 6 Jan 2020:


      Nothing serious. It was a speech recognition system (one that converts speech sounds into written text) developed at my university in Aachen, Germany.
      Both male and female voices were used to train the system, which is the proper way to do it. However, only native German speakers participated in the training. I myself don’t have a strong foreign accent, but I certainly don’t sound like a native German speaker. Therefore, it always failed on at least one word when I was speaking to it.

    • Photo: Samantha Durbin

      Samantha Durbin answered on 8 Jan 2020:


      As has been discussed above, there are biases in lots of things and it depends how stuff is developed. But it’s not just software – the world around us is biased in different ways, so most of us are affected somehow. I’m quite short, so when things are designed for the average male height I can struggle (and some things are a lot more dangerous for me – like driving, as the airbags etc. in my car have been developed for someone who sits much further away from the wheel). You see things like this in some robot design – if they can’t cope with slight deviations from the norm they really struggle to function.

      Another big example is that most of the world around you has also been developed for someone who can walk around easily, including our transport system – if it had been designed in a different way from the beginning, people who use wheelchairs (for example) would be able to get around much more easily and it would just be the norm to not have steps (among other things).

    • Photo: Amy Mason

      Amy Mason answered on 15 Jan 2020:


      Not that I’ve noticed, but I’m sure there is bias in my work that I won’t spot until someone else points it out. I just hope it is small and doesn’t change my conclusions.

      A lot of statistics is about trying to work out what biases could be affecting your model and trying to control for them. For example, if I assume the noise in my model is random, but later find a relationship between the age of the patient and the noise, then my assumptions are wrong: I would either need to adjust my model to account for age, or estimate how much error ignoring age is causing.
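      A minimal sketch of that residual check, using invented patient data (the numbers, and the ‘dose’ variable, are assumptions made up for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200
        age = rng.uniform(20, 80, n)
        dose = rng.uniform(0, 10, n)

        # Invented truth: the outcome really depends on both dose and age.
        outcome = 2.0 * dose + 0.5 * age + rng.normal(0, 1, n)

        # Model 1 ignores age: predict the outcome from dose alone.
        slope, intercept = np.polyfit(dose, outcome, 1)
        residuals = outcome - (slope * dose + intercept)

        # If the noise really were random, the residuals would be unrelated
        # to age. Here they clearly are related, so the assumption fails.
        print(np.corrcoef(residuals, age)[0, 1])      # close to 1

        # Adjusted model: include age as a covariate.
        X = np.column_stack([np.ones(n), dose, age])
        beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
        residuals_adj = outcome - X @ beta
        print(np.corrcoef(residuals_adj, age)[0, 1])  # near 0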
