Justin Brody is an assistant professor of mathematics and computer science, and he runs Goucher’s Laboratory for Computer Cognition. He came to the college in 2012, after three years of teaching at Franklin & Marshall College. Brody has a doctorate in mathematics from the University of Maryland and an expertise in artificial intelligence, but his interests extend to all aspects of the mind—human and otherwise.
Goucher Magazine: So, you spent a whole year in a Buddhist monastery. How did that come about?
Justin Brody: A friend of mine had taken me to a talk about some Tibetan art, and I had been reading a lot of Carl Jung at the time, so it seemed to tie very much into that—at least the way the guy was explaining it—so I was kind of fascinated from the beginning. And that led me into Buddhism. I started meditating in the middle of college, and I just really liked it. Then I got a job that I didn’t love, so I thought, ‘Let me go do what I love instead.’
What did you do all day there?
The abbey had a program where you could take ordination and become a temporary monk. It was sort of half meditation and half studying Buddhist philosophy.
There was an option to become a permanent monk, and that was there in the back of my mind, but I didn’t do it. It’s funny, because there’s nothing fundamentally different about a monastery, you know; at the end of the day, it’s just a place. So I remember thinking, ‘There’s nothing I’m doing here that I can’t just do back in Baltimore.’ But I didn’t really appreciate the amount of support you get there. Even though it’s theoretically possible, it’s tough to just ignore everything and do meditation.
Did it get boring?
No, it never got boring. I loved it. Your mind is always shifting, and it’s always different. Physically it’s always the same thing over and over, but your mind is constantly changing and you’re going into deeper levels. So no, I never got bored. I think you’re supposed to get bored, though. Maybe I wasn’t doing it properly.
How do you connect that study with your study of artificial intelligence (AI)?
They’re both ways of approaching the human mind, I think, just from very different angles. I’ve started working with this guy down in College Park, and he’s actually approaching the same questions that Buddhists have been wrestling with for thousands of years. He’s starting to approach them using AI models, so there’s this sort of convergence between this phenomenological approach, where you look at your experience of ‘mind,’ and the modern cognitive science approach, where you’re actually trying to see how it works as a system.
The traditional paradigm in cognitive science was that the mind is very much like a computer, and drawing from a lot of Buddhist influences, a hot approach now is something called embodied cognition—where things don’t quite work like computers in a traditional sense. It’s sort of a more organic and situated thing. That, to me, is incredibly exciting because it’s drawing somewhat explicitly on Buddhist ideas, and it’s sort of at the forefront in straight cognitive science, definitely, but also increasingly in robotics and AI as well.
Can you say more about that difference?
In old-fashioned AI, there’s the idea that you could just program everything that’s true about the world into a computer, and it would do its thing and be sort of a brain in a vat and think new thoughts. The other approach is saying that any sort of thing that thinks has to have its own experiences, and it has to make sense of the world for itself in some way. So you can’t really just program a bunch of stuff into it; it really has to grow and develop.
For example, computers are used now to recognize handwriting, so an old-fashioned approach would be very rule-based. It would say—OK, you’ve got this curve on top and this other curve over here, so it must be an ‘A.’ You may have some other rules that say what it takes to be a ‘B.’ But it turned out there’s so much variation in what an ‘A’ looks like and what a ‘B’ looks like that it’s pretty much impossible to come up with an explicit set of rules.
The other way of doing it is to feed a network a bunch of images of ‘A’s.’ Every time it gets it right you say, ‘Great, you’re doing a good job.’ When it gets it wrong, though, you say, ‘No, you got that wrong.’ Then it goes back and modifies itself. The whole game is that through this learning and experience of seeing all these ‘A’s,’ ‘B’s,’ and ‘C’s,’ it can actually learn to recognize individual letters. At this point, such networks can outperform humans at this task, which is sort of interesting.
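The learn-from-examples approach Brody describes can be sketched in a few lines of code. This is a minimal illustration, not his lab’s actual system: the 5×5 bitmap “letters,” the noise model, and the one-layer network are all invented here for demonstration. The key idea it shows is the same, though: no rules are written down; the network adjusts its own weights each time it is told it got a letter right or wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized 5x5 templates for 'A' and 'B' (1 = ink, 0 = blank) --
# toy stand-ins for real handwriting images.
A = np.array([[0,1,1,1,0],
              [1,0,0,0,1],
              [1,1,1,1,1],
              [1,0,0,0,1],
              [1,0,0,0,1]], float)
B = np.array([[1,1,1,1,0],
              [1,0,0,0,1],
              [1,1,1,1,0],
              [1,0,0,0,1],
              [1,1,1,1,0]], float)

def noisy(template):
    """Simulate handwriting variation by flipping two random pixels."""
    x = template.flatten().copy()
    for i in rng.choice(x.size, size=2, replace=False):
        x[i] = 1 - x[i]
    return x

# Training set: many noisy 'A's (label 0) and noisy 'B's (label 1).
X = np.array([noisy(A) for _ in range(200)] + [noisy(B) for _ in range(200)])
y = np.array([0] * 200 + [1] * 200)

# One-layer network (logistic regression) trained by gradient descent.
w, b = np.zeros(25), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted probability of 'B'
    grad = p - y                        # the "no, you got that wrong" signal
    w -= 0.1 * X.T @ grad / len(y)      # the network modifies itself
    b -= 0.1 * grad.mean()

# After training, it classifies fresh noisy letters it has never seen.
test = np.array([noisy(A) for _ in range(50)] + [noisy(B) for _ in range(50)])
truth = np.array([0] * 50 + [1] * 50)
pred = (1 / (1 + np.exp(-(test @ w + b))) > 0.5).astype(int)
accuracy = (pred == truth).mean()
print(f"test accuracy: {accuracy:.2f}")
```

Nothing in the code says what an ‘A’ is; the weights that distinguish the two letters emerge entirely from the right/wrong feedback, which is the contrast with the rule-based approach described above.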
What’s your current project?
I have a bunch of students training computers to play Atari videogames. The idea is we feed the computer the images, and it’s supposed to pick out what things are. The innovation is that we’re adding a notion of the computer itself into the loop, and it’ll use that recognition to improve how it’s learning to play the games.