A couple of days ago, I was listening to a Data Skeptic podcast episode on Why Machines Will Never Rule the World.
I am not interested in AGI if it means human-level intelligence (HI). However, the quest for HI (AGI) is worthwhile: it puts into perspective how little we know about intelligence.
– AI, AGI, and ASI (general and super intelligence) are just applied mathematics
– Intelligence will not emerge from complex systems because complex systems are difficult to emulate.
– The authors seem to see a strong correlation between reward systems and increasing intelligence
– They think that intelligence is exhibited in our ability to find solutions to unseen problems, and that current AI does not do that. What about AlphaGo’s move 37? Just a lucky combination?
– Intelligence is also exhibited when you solve an old problem in new ways
They also discuss many other topics, including the drivenness of life, algorithmic reward systems, evolution and complexity, and more.