Another day, another leak of AI data

AI should be making our lives easier, but human error is still resulting in leaks of personal data.

You likely haven't given much thought to what happens in the background as you use services online, but it's actually quite important. As you sign up for more online services that offer easy, cheap or even free access to AI products, you are often agreeing to let that website store and access your data for future use. Whether that means advertising, training future AI, or something else is up to them, not up to you.

If a service provider does not take proper measures, this data, your data, can be accessed without your permission.

Unfortunately, Rabbit - maker of the R1, a small, $200 orange device that sold out shortly after launch - is the latest company to let personal data be accessed by unauthorized people.

In a recent post, "rabbitude", a team focused on finding flaws in the r1, revealed that they have found ways to:

  • read every response every r1 has ever given, including ones containing personal information
  • brick (break) all r1s
  • alter the responses of all r1s
  • replace every r1's voice

This is obviously bad news for any rabbit r1 customer, and to make it worse, there is apparently internal confirmation that the rabbit team is aware of the issue and has chosen to ignore it. While it's easy to claim they were "hacked" and there was nothing they could do, unfortunately a lot of this boils down to human error and poor security practices.
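The post above doesn't spell out the specific flaw, but "human error and poor security practices" often means something as mundane as credentials baked into source code, where anyone with a copy of the code can extract them. As a hypothetical sketch (the variable name and function here are illustrative, not from rabbit's actual codebase), the safer habit is loading secrets from the environment at runtime:

```python
import os

# BAD: a key hardcoded in source ships with every build and lives
# forever in version control:
# API_KEY = "sk-live-abc123"

# BETTER: read the secret from the environment at runtime, and fail
# loudly if it is missing rather than falling back to a baked-in value.
def load_api_key(var_name: str = "SERVICE_API_KEY") -> str:
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; refusing to start")
    return key
```

A leaked environment variable can be rotated on the server; a key compiled into thousands of shipped devices cannot be revoked nearly as easily.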

Hopefully nobody shared too much personal information with this service, but it's easy to forget that any information you do share will likely be stored away for future use - or leaked. Ensuring your data is stored encrypted, with a key only you hold, is a simple step toward making sure you are the only one who can access your personal information.
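The core idea is that encryption only protects you if the key stays with you: the service stores ciphertext it cannot read. Here is a toy sketch of that principle using a one-time pad (XOR with a random, single-use key) - deliberately minimal to show the concept, not production cryptography; a real application should use a vetted library with authenticated encryption:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data with a key of equal length (a one-time pad)."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"my home address and calendar"
key = secrets.token_bytes(len(message))  # random key, used once, never shared

ciphertext = xor_bytes(message, key)     # this is all the service ever sees
recovered = xor_bytes(ciphertext, key)   # only you, holding the key, can do this

assert recovered == message
```

If the service's database leaks, an attacker gets only random-looking bytes; without your key, your data stays yours.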