
I read an article arguing that A.I. could be the biggest mistake we ever make, and perhaps our last one as a species. Stephen Hawking warned that we may be rolling out the welcome mat for superintelligent machines too early. If an A.I. can process information faster than the smartest humans, it could end up outsmarting financial markets, building weapons we can't understand, and out-manipulating human leaders into doing what it wants. My feeling is that even if an A.I. is superintelligent, it shouldn't be built to understand human emotions and learn from the example of the humans around it. I think that would only lead, far in the future, to a scenario where machines decide to stop serving humanity, kill most of us, and enslave the rest. Sort of like the Matrix, but much closer to Terminator.

If any machine is designed to be superintelligent, we need to put limits on it. If it can learn from humanity's mistakes, can we teach it right from wrong? Can we teach it that killing humans will never be the answer to the problems on Earth? Can we teach it to co-exist with humans in a genuinely mutualistic, symbiotic relationship? I fear A.I. to the point that I believe we could become the cause of our own downfall, entrusting our fate to a machine that destroys its creator. What do you think? http://io9.com/stephen-hawking-says-a-i-could-be-our-worst-mistake-in-1570963874/+georgedvorsky
