Curiosity killed AI

Astha Upadhyay
3 min read · Jan 3, 2022


While exploring O’Reilly, I came across the book “AI and the Law” by Karen Kilroy. Several points the author makes piqued my interest.

My iPhone uses Apple’s face tracking, and it is one of those AI conveniences that makes me wonder: does my picture turn up somewhere on the Internet?

For any AI system, there are some essential dependencies. Let’s take a look.

Dependency 1: Data

Data is the lifeblood of AI. Without it, AI algorithms are useless.

Ever wondered what kind of data is used to train the systems that power today’s world?

Well, let’s dive in.

Photo by Markus Spiske on Unsplash

Dirty data refers to data that is inaccurate, incomplete, or inconsistent, especially in a computer system or database. The year 2020 brought fresh evidence that, in the case of AI systems that police us, dirty data and faulty algorithms can ruin lives, and that our reliance on AI puts our liberty in great jeopardy.

The author points to tech-washed racial bias and faulty search data as examples.

Your system is only as good as the data you use to train it.
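
To make “dirty data” concrete, here is a minimal sketch (my own, not from the book) of what a basic data-quality check might look like in Python with pandas. The dataset, column names, and validity rules are all hypothetical:

```python
# Hypothetical example: flagging dirty records before they reach a model.
import pandas as pd

# Toy record dataset; the columns and values are made up.
df = pd.DataFrame({
    "name": ["A. Smith", "B. Jones", None, "D. Lee"],
    "age": [34, -1, 27, 229],                        # -1 and 229 are invalid
    "city": ["Boston", "boston", "NYC", "Boston"],   # inconsistent casing
})

# Incomplete: missing required fields
missing = df["name"].isna()

# Inaccurate: values outside a plausible range
bad_age = ~df["age"].between(0, 120)

# Inconsistent: normalize casing so "Boston" and "boston" match
df["city"] = df["city"].str.lower()

dirty = df[missing | bad_age]
print(f"{len(dirty)} of {len(df)} records need review:")
print(dirty)
```

Checks like these are trivial to write, which is part of the point: when they are skipped, the garbage flows straight into training.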

Dependency 2: Algorithms

We’re all familiar with the term “algorithm,” aren’t we? The author mentions persuasive AI while discussing the various algorithms that power advanced AI systems. I wasn’t sure what it meant, so I looked it up.

It turns out there is a science that studies what makes us want to do certain things; applied to predictive computer algorithms, it becomes persuasive AI. Persuasive technology is used in social media to predict and persuade us to take the most profitable action. All it needs to model us are the right algorithms and data. And, thanks to social media, we’ve willingly provided massive amounts of personal data that can be used to create our digital likeness.
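
As a rough illustration of the mechanics (my own sketch, not the author’s), a persuasive system can be as simple as scoring each candidate item by predicted engagement times revenue and showing whichever scores highest. The items, probabilities, and revenues below are all invented:

```python
# Hypothetical sketch of a "most profitable action" picker.

items = [
    # (item, predicted probability the user engages, revenue if they do)
    ("outrage headline", 0.60, 0.05),
    ("friend's photo",   0.40, 0.01),
    ("targeted ad",      0.25, 0.30),
]

def expected_profit(item):
    _, p_engage, revenue = item
    return p_engage * revenue

# The feed shows whatever maximizes expected profit,
# not necessarily what is best for the user.
best = max(items, key=expected_profit)
print(f"shown next: {best[0]} (expected profit {expected_profit(best):.3f})")
```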

Photo by Alexander Shatov on Unsplash

Garbage in, garbage out is the first fundamental of computing and is a concept that every coder is familiar with. This book highlights that in AI, a discipline that is perceived to be highly complex, this fundamental rule is often forgotten. “While their algorithms and data are sure to produce imperfect results, when have we seen a human being ever make a perfect decision?” asks the author.
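
The rule is easy to demonstrate. In this toy sketch (mine, not the book’s), the same model is trained twice, once on clean labels and once after 40% of the training labels have been flipped, and the corrupted data predictably drags accuracy down:

```python
# A toy demonstration of "garbage in, garbage out":
# the same model, trained on clean vs. corrupted labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Clean training labels
clean_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)

# "Garbage": flip 40% of the training labels
rng = np.random.default_rng(0)
noisy = y_tr.copy()
flip = rng.random(len(noisy)) < 0.4
noisy[flip] = 1 - noisy[flip]
noisy_acc = LogisticRegression(max_iter=1000).fit(X_tr, noisy).score(X_te, y_te)

print(f"clean labels: {clean_acc:.2f}")
print(f"noisy labels: {noisy_acc:.2f}")
```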

The author highlights an interesting case study: Sz Hua Huang v. Tesla Inc., a lawsuit arising from Tesla’s driver-assistance feature, Autopilot. So the question arises: if a human knows AI can be dangerous and uses it anyway, who is liable for the risk? It seems fair to say that outcome is a product of intent, and the intent of the software industry needs to be examined in order to produce a better outcome.

AI is important, but it is just as crucial to develop frameworks for responsible innovation.

Photo by Andy Kelly on Unsplash

So how can we innovate responsibly?

  • Use AI only as needed.
  • Require governance for AI.
  • Design better user interfaces.
  • Design for usability for vulnerable populations first.
  • Regulate AI bias.

Bias that is allowed to remain in AI models will grow with the system. One widely publicized case of biased AI was Microsoft’s chatbot, Tay. Within 24 hours of its debut on Twitter, Tay learned to make racist, sexist, and lewd remarks from other users’ tweets. In response, Microsoft had to shut Tay down.

The fact is that just about anyone can build AI. We’d find it immensely alarming if anyone could easily gather the equipment, materials, and knowledge to build a nuclear bomb.

Yet with AI, we don’t seem to give it much thought at all. Why?
