TOR146 ― How AI And Machine Learning Prove The Journey Is More Important Than The Destination With Dr. Kenneth Stanley

Kenneth Stanley

Listen Now


How obsessed are you with goal setting? I mean seriously: if you’ve been living on planet Earth, you know that those invested in the science of achievement have done an incredible job of not only convincing us, but actually getting us, as a whole, to adopt goal setting as a lifestyle. Even if you don’t set goals, you know it’s something you should be doing. And, full disclosure, I am not only a living, breathing product of this paradigm, but I’m also one of its loudest proponents (at least in my mind).

Goal setting has become something of an absolute in the design of products, services and programs in the social sector, as well. Just think of how many times you’ve designed theories of change, conceptual and logical frameworks, or SMART indicators. You know what I’m talking about here: the obsession with defining the results that we want to achieve with our work and, perhaps even more importantly, how we will measure our progress towards achieving them. While I could hold court for the next four days on how to improve the way we design programs to achieve social goals, here’s the thing: this show isn’t about that. In fact, we’re about to spend the next hour and change talking about how setting objective goals may be the exact wrong thing to do when trying to achieve something amazing.

My guest, and truly the inspiration for this 146th episode of the Terms of Reference Podcast, is Dr. Kenneth Stanley. He currently works at Uber AI Labs while on leave from the University of Central Florida, where he is an Associate Professor in the Department of Computer Science focusing on artificial intelligence and machine learning. I learned about Ken when ISG’s Director, Micheal Klein, shared an article with me from the FiveThirtyEight blog titled “Stop Trying To Be Creative,” written by Christie Aschwanden. I’ll let you read that on your own, but suffice it to say that I was intrigued and had to know more, including how it might apply to the business of helping others. And I think that’s exactly what you’ll get in this episode. The revelation in this podcast should, ultimately, blow your mind. We get started by talking about AI and Ken’s pet project (called Picbreeder) and how that led to his discovery that objectively determining what we want to achieve may be the best way to miss out on true awesomeness.

Get Kenneth’s book, coauthored with his longtime collaborator Joel Lehman: Why Greatness Cannot Be Planned: The Myth of the Objective. You can connect with Kenneth here: http://www.cs.ucf.edu/~kstanley/

IN TOR 146 YOU’LL LEARN ABOUT

  • An intricate look at artificial intelligence from Kenneth’s vantage point, which is focused on replicating the 500-million-year process that evolved the human brain
  • Kenneth’s long arc through evolutionary computation, up to the present moment, when the next step in that evolution begs us to reconsider what computation should be tasked to do
  • The fallacy of the objective: why closed-ended learning may be limiting our systems, including aid and development’s
  • The benefits of “small data” machine learning approaches
  • Just how much our inherent biases pervade our systems, technology and innovation thinking


EPISODE CRIB NOTES

Greatness, not 0:04:29,830
  • this is a really kind of unexpected development that I didn’t set out to get into
  • I was originally just interested in artificial intelligence, machine learning
  • particularly the brain
  • I was interested in the idea of modeling and simulating brain like things inside of computers
  • neuroevolution: setting up a Darwinian-like process in a computer that causes artificial brains to evolve in the computer
  05:14,900
  • artificial neural networks are very popular, so this may be somewhat familiar to people who’ve heard of deep learning
  • those are actually artificial neural networks
  • this is something I’ve been working in for a long time but connected to the idea of evolution
  • for many years I was doing this without any kind of ambition about writing a book against having an objective
  But what is learning? 05:56,150
  • historically, there has been interest in artificial neural networks for decades
  • an artificial brain: something inside a computer program that’s hooked up sort of like the brain in your head
  • neurons are connected, neurons send signals to each other
  • maybe you could hook up these artificial neurons in a way that’s sort of analogous to the way real neurons are hooked up in real brains
  • maybe they can do things a little bit like what brains actually do
  training algorithms
  • you’d be exposing the network to tasks or problems or data and it would learn from that data and get better at doing something
  • for example it might get better at recognizing handwriting
  • you could show it images of letters
  • when a neural network makes a mistake, an error signal is sent through the network that causes it to update in some way such that the error will be reduced in the future
  • it gets kind of smarter and smarter over time
  • the technical term for this is Stochastic Gradient Descent (SGD) (a minimal sketch follows after this list)
  • it’s difficult to know how to hook up these neurons with each other
  • what is the best configuration or architecture of a brain?
  • the architecture that is most supportive of the ability to learn intelligently and efficiently
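To make the gradient-descent idea above a bit more concrete, here is a minimal sketch in Python. It is not code from the episode: the single-neuron model, the toy data and the learning rate are all invented purely for illustration.

```python
import random

# One artificial "neuron": a few weights we will adjust by gradient descent.
weights = [random.uniform(-1, 1) for _ in range(3)]
learning_rate = 0.1

def predict(inputs):
    # The neuron's output is just a weighted sum of its inputs.
    return sum(w * x for w, x in zip(weights, inputs))

# Toy "dataset": each example is (inputs, the target output we want to learn).
examples = [([1.0, 0.0, 1.0], 1.0), ([0.0, 1.0, 1.0], 0.0)]

for step in range(1000):
    inputs, target = random.choice(examples)  # "stochastic": one example at a time
    error = predict(inputs) - target          # how wrong the network currently is
    for i, x in enumerate(inputs):
        # Nudge each weight in the direction that reduces the error.
        weights[i] -= learning_rate * error * x
```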
  What does artificial intelligence look like in practice? 08:47,960
  • A neuron takes signals in and then it does something to the signals
  • then it outputs its own signal
  • that’s basically a program
  • it tends to be a very simple program
  • neurons, at least as I’ve described them in these artificial neural networks, tend to be very simple and small
  • the complexity of what a brain does is emerging from the coordinated activity of many, many neurons connected to each other
  • any one neuron is a very simple, uninteresting device
  • sometimes this work is called connectionism
  • information flows over those connections
  • by flow I mean that the program actually simulates information moving from one neuron to another over the connection between them
  • it’s basically a big simulation of neurons sending signals to each other
  • the problem is that we don’t know how they should be connected such that they’ll do something “intelligent”
  • just connecting them to each other doesn’t automatically make them intelligent (a toy sketch of these simple neuron programs follows below)
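A toy illustration of the connectionism described above: each neuron is a tiny program that takes signals in, transforms them, and passes its own signal on. The network structure, weights and input values below are made up; this is only a sketch of the idea.

```python
import math

def neuron(inputs, weights):
    # A neuron takes signals in, combines them, and sends its own signal out.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid "activation"

# Two input signals flow into two hidden neurons, whose outputs flow into
# one output neuron: information moving over the connections between them.
signals = [0.5, -1.2]
hidden = [neuron(signals, [0.8, -0.3]), neuron(signals, [-0.5, 0.9])]
output = neuron(hidden, [1.1, -0.7])
print(output)
```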
  Could you give us an example of actual emergent intelligence? 10:50,199
  • these things were trained with an objective
  • sometimes people even call it the “objective function”
  • in introductory neural network courses, a classic example is training a network to recognize handwriting
  • so you might say the objective in this case is to identify, as accurately as possible, the digit that corresponds to a particular image
  • so we feed in images of digits that a human wrote on a piece of paper
  • the output of the neural network is a number from 0 through 9
  • if it errs, that’s wrong, and so it is not achieving its objective
  • we would then record that as an error and we would use the fact that it was wrong to change its internal structure in some way
  • if you expose it to enough examples like this over time it will gradually get better from all of the mistakes it’s going to make
  • it will become almost as good as, say, a human reading digits off a page (a hedged end-to-end sketch follows after this list)
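For readers who want to see that digit-recognition setup end to end, here is a hedged sketch using scikit-learn. This is one possible library choice, not the tooling discussed in the episode, and the network size and training settings are arbitrary.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # small images of handwritten digits, labeled 0 through 9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0
)

# The "objective function" is the classification loss the network minimizes
# as it is exposed to labeled examples and corrected on its mistakes.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("accuracy:", net.score(X_test, y_test))  # how often it reads digits correctly
```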
  What’s the endgame? Well Picbreeder, of course 14:24,680
  • there is a distinction between deep learning and neuroevolution
  • deep learning is as pop culture as you possibly can get
  • I had a different interest, on a more meta level
  • how did the brain get there in the first place?
  • not how does it learn during its lifetime, but rather how did it evolve
  • the brains of our ancestors evolved to become increasingly complex and sophisticated until they were human level brains
  • a process that went on for millions of years that eventually ended up with a human level brain
  • I was interested in whether a process like that could somehow be recapitulated in a computer
  • you could actually get brains to evolve through some kind of evolution-like process
  • it is really like breeding
  • we would say: here are ten neural networks, let’s have them all try to do something
  • so we’re literally going to breed them, like we would breed dogs or horses
  • then they’ll have baby brains which then do the same task
  • the ones that are better will again be allowed to have children and those will be their offspring and that’s the next generation
  • and so forth
  • this is sometimes called an evolutionary algorithm
  • what I was interested in was evolutionary algorithms for evolving brains, “neuroevolution” (a bare-bones sketch follows at the end of this section)
  • it’s a bit different from the conventional deep learning
  • it’s sort of amazing: this is a process with no engineering, no guidance
  • we could have computers building stuff for us that we don’t even know how to build ourselves
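Here is a bare-bones sketch of the kind of evolutionary algorithm Ken describes: evaluate a population of “brains”, let the better ones reproduce, and repeat. The representation, fitness function and mutation scheme below are placeholders invented for illustration, not NEAT or any system from the episode.

```python
import random

def random_brain():
    # Stand-in for a neural network: just a flat list of weights.
    return [random.uniform(-1, 1) for _ in range(10)]

def mutate(parent):
    # An offspring is a slightly perturbed copy of its parent.
    return [w + random.gauss(0, 0.1) for w in parent]

def fitness(brain):
    # Placeholder score: in practice this would measure how well the network
    # performs some task (e.g., controlling a simulated creature).
    return -sum(w * w for w in brain)

population = [random_brain() for _ in range(10)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                            # the better half reproduces
    children = [mutate(random.choice(parents)) for _ in range(5)]
    population = parents + children                     # the next generation
```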
  The next big thing: ‘tiny’ computation 20:13,130
  • deep learning is basically big computation
  • but perhaps rather than looking at giant data sets, you give it a small task
  • we’re looking at the ability to really do impressive things with very few iterations which is kind of the opposite of deep learning
  • your evolution can be data-driven as well as evolutionary
  Picbreeder 21:20,270
  • great things happening with very little input
  • the road that ended with the book, which was sort of anti-objective
  • so I’m sitting there trying to figure out how to evolve brains…
  • I created an algorithm, NeuroEvolution of Augmenting Topologies (NEAT)
  • the algorithm could evolve artificial neural networks, artificial brains, to get more complex over time
  • that’s what the word augmenting means
  • in nature, presumably, brains got more complex over time
  • people started experimenting with it on their own
  • one of the things, among many, that people were doing with NEAT
  • an idea I hadn’t thought of was to ask the neural networks to draw a picture
  • the idea was that a neural network can output anything you want
  • typically the output was something simple, like true or false
  • but it can also output something like a picture or even a song (a loose sketch of the idea appears at the end of this section)
  • before Picbreeder, this was hobbyists just playing around
  • genetic art, or evolutionary art
  • each image is generated by a different neural network
  • picture breeding program
  • I was able to breed things inside of it that I never thought you would be able to get so easily
  • they looked great, they were just amazingly detailed
  • I was just kind of fascinated with that
  • there’s all kinds of implications of this. I still don’t know what they are but it’s just surprising that you can do this
  • it’s a small-data thing, because it would only take something like 50 selection steps and I would get a spaceship
  • compared to the millions of iterations of modern algorithms
  • you can look at some of the pictures on the FiveThirtyEight blog
  • “this would be a really cool online service”
  • people could just go on the web and breed pictures
  • we would be able to crowdsource the whole world and what would happen would be that we would get to see human beings putting in their efforts converted into a kind of a survey of what’s possible in this kind of world of neural network generated imagery
  • it led to Picbreeder
  • I was very shocked that I got a car. Never thought I would get one. I realized that I had not been trying to get a car. I actually started with an alien face
  • I was taught that the way you get things done in the field of artificial intelligence, machine learning and engineering
  • and in the whole educational system and scientific culture, is to set your objectives and then gauge your progress against them
  • if you’re not moving toward your objectives, you’re basically slacking off
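A loose sketch of the idea behind Picbreeder-style images: a small network maps each pixel’s coordinates to a brightness value, so a compact “genome” can draw an entire picture. Picbreeder’s real networks are evolved with NEAT; the hand-picked function below is only a stand-in for an evolved network.

```python
import math

def pattern_network(x, y):
    # Hand-picked composition of simple functions standing in for an evolved
    # network; changing these "genes" changes the entire image.
    return math.sin(3 * x) * math.cos(3 * y) + math.exp(-(x * x + y * y))

size = 20
for row in range(size):
    line = ""
    for col in range(size):
        # Map pixel indices to coordinates in [-1, 1] and query the network.
        x = 2 * col / (size - 1) - 1
        y = 2 * row / (size - 1) - 1
        line += "#" if pattern_network(x, y) > 0.5 else "."
    print(line)
```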
  You decide what is worthwhile in all this 29:21,619
  • I just kept obsessing about this for a while
  • one of the things we did when I started thinking about this with my group was to look at the history of other discoveries
  • I discovered a car; could somebody have discovered a skull or a butterfly?
  • there were innumerable interesting discoveries, and in every single interesting case we looked at, it was the same story
  • none of them were the objective of the person who actually found them
  • we can see the stepping stones that they traversed to get there
  • it’s inconceivable that they could have been purposefully moving toward what they ultimately ended up at
  • just about none of the discoveries on the site were discovered as an objective
  • we think we know how discovery, creation, innovation work
  • I started thinking about things like: if I was doing machine learning, trying to train a car to drive around a track, typically what I would have is some kind of objective function
  • but then I thought there’s another way of thinking about this, not objective at all
  • if you just keep pressing something to do something new, it has to get sophisticated in some way
  • it can’t stay simple
  • novelty search was inspired by this insight
  • it’s an algorithm that just tries to get neural networks to keep generating novel behaviors, with no final objective (a minimal sketch appears at the end of this section)
  • with maze-navigating robots, this actually worked better than the conventional thing of rewarding the robot for getting closer to the end of the maze
  • it’s not just a matter of asking questions about something versus trying to solve it
  • it’s really even beyond that
  • if you have a willingness to explore and to follow your intuitions, to say “this is interesting” without trying to justify it
  • it may actually lead to something interesting, something you wouldn’t have discovered otherwise
  • we need room to be able to explore in this manner
  • humans are very good at identifying what’s interesting
  • we do not allow people to do that institutionally
  • we only allow people to do things that they justify objectively
  • which means that all those stepping stones are being cut off
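And a minimal sketch of the novelty search idea: instead of rewarding progress toward an objective, reward behaviors that differ from what has been seen before. The “behavior” here is just a 2D point (for example, where a maze robot ends up), and the genome, distance measure and parameters are simplified stand-ins rather than the published algorithm.

```python
import random

def behavior(genome):
    # Placeholder: in a real system this would run the neural network (e.g.,
    # a maze-navigating robot) and record where it ended up.
    return (sum(genome[:5]), sum(genome[5:]))

def novelty(b, archive, k=5):
    # Novelty = average distance to the k nearest behaviors seen so far.
    if not archive:
        return float("inf")
    dists = sorted(((b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2) ** 0.5 for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(20)]
archive = []
for generation in range(50):
    scored = sorted(population, key=lambda g: novelty(behavior(g), archive), reverse=True)
    archive.extend(behavior(g) for g in scored[:3])  # remember the most novel behaviors
    parents = scored[:10]                            # novelty, not an objective, decides
    population = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                  for _ in range(20)]
```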
  A new way to do things (if ‘doing things’ is your thing, of course) 40:46,440
  • I started getting invited to give talks in a lot of different places
  • it was interesting to the machine learning community, the evolutionary computation community
  • I started to notice that a lot of the questions I was getting at these talks were not about machine learning
  • the more I got asked these questions the more I started to integrate that kind of discussion into these talks
  • eventually I got invited to talk at the Rhode Island School of Design
  • I was much more willing to play with the personal implications
  • the response that I got was so emotional
  • this resonated so much with people
  • there were cathartic sessions, almost like therapy
  • “for the first time I realized after your talk that I don’t have to actually have an explanation for what is my objective”
  • so this made a lot of these kids feel a lot better about what they were doing
  • I suddenly realized it does need to be a book
  • this is speaking to a very large audience
  • it’s not just about machine learning
  • this is speaking to how people approach their lives and everyday struggle
  • and how society approaches many of its biggest problems, which are almost always objectively driven, even though this exposes a huge flaw in objective thinking
  • in the field of search in machine learning, where you’re always searching for something, this was recognized to some extent
  • people know about local optima and things like that
  Extrapolating from the evolutionary computation and online collaborative community experience 57:18,950
  • try to replicate what we learned from the Picbreeder experience
  • in the context of education, “maybe”
  • education is one of the world’s greatest problems
  • “I think, because I’ve thought about this a lot, it might inspire other people to think of more unique ways of doing things”
  • at least in the US, we like to do things based on tests
  • the system really should be transformed into a treasure-hunting system
  • because education must be one of the most complex problems that we face in our society
  • we can still have oversight in the system, basically something like peer review
  • I guess you would go to an NGO and say, here’s what I want to do, give me some money

 

Please share and participate

If you have any questions you’d like to ask me or Ken directly, head on over to the Ask Stephen section. Don’t be shy! Every question is important and I answer every single one. And, if you truly enjoyed this episode and want to make sure others know about it, please share it now:
Also, ratings and reviews on iTunes are very helpful. Please take a moment to leave an honest review for The TOR Podcast!

Love this show? Tell us about why (or why not) below:
