I heard its voice going back and forth, back and forth. I shut the door and hide in my room, waiting until it’s safe. Yes, safe. It gets under your feet all the time like there’s no tomorrow. But the carpet is vacuumed. Vacuumed by a bot.
Robots were once the stuff of science fiction. Now that they are part of our everyday world, what will our future look like? Will they be the villains of our story, like The Terminator or The Matrix? Will they break the 3 Laws of Robotics for the good of mankind, like Sonny in the movie I, Robot? Or will they be part of an amazing future, beyond our wildest dreams?
Let’s get it right
Müller, Vincent C., “Ethics of Artificial Intelligence and Robotics”, The Stanford Encyclopedia of Philosophy, focuses on AI and the “genuine problems of ethics. (A place) where we do not readily know what the answers are” (Müller, 2021). It’s a robust 32-page read on my computer.
Here are the 3 main AI areas covered:
- AI as objects – Robots as machines, much like things are now with my vacuum-cleaning bot.
- AI as subjects – This is where Isaac Asimov’s 3 Laws of Robotics would apply.
- AI as superintelligence – AI becoming a singular entity that decides things based on its vast wealth of knowledge. This is where VIKI in I, Robot would apply. Would humanity become redundant?
I’m only doing 6 takeaways for AI as objects, as this ties in with my everyday life. I’ve bracketed the sections they’re sourced from. Whole areas are left out. Some of the insights made me feel a bit jittery. Great source material for science fiction writing, though.
AI as Objects
- Regulations covering privacy and surveillance are lagging (2.1).
- We have lost ownership of our data (2.1). Do read this section. It’s all about data trails and how our raw data is used (e.g. facial recognition in photos and videos allows identification and profiling).
- Algorithms can be used to manipulate people and nudge them in a particular direction (2.2). I see this at work now. Infonews gives less news than a print newspaper these days, yet infonews is the model TV uses now.
- Data bias (2.4). Output quality depends on the quality of what a person puts in: “garbage in, garbage out”. So if the data has bias, the program will reproduce that bias. Think of the movie Minority Report.
- AI systems use a vast amount of energy and are hard to recycle (2.6).
- Autonomous weapons exist but cannot conform to International Humanitarian Law (2.7). They can’t differentiate between civilians and military.
Such an interesting read. This document really opened my mind to where things are at currently. There are lots of voices out there trying to make things go their way. Let’s just hope that we can get it right.
Müller, Vincent C., “Ethics of Artificial Intelligence and Robotics”, The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/>.