Fran Buontempo is the author of Genetic Algorithms and Machine Learning for Programmers, published in 2019 by The Pragmatic Bookshelf. She is the editor of ACCU’s Overload magazine. She has published articles and given talks centered on technology and machine learning. With a PhD in data mining, she has been programming professionally since the 1990s. During her career as a programmer, she has championed unit testing, mentored newer developers, deleted quite a bit of code, and fixed a variety of bugs.
Q1. Your book on Genetic Algorithms and Machine Learning was published earlier this year. What led you to write the book and what does it cover?
Once upon a time, I interviewed a candidate with a colleague, and he said, “They couldn’t even program their way out of a paper bag!” I thought that was a tad unfair, and started to consider how you could demonstrate you can code your way out of a paper bag, wet or otherwise. At the time I was trying to find toy problems to practise a few things I learnt during my PhD and to try other machine learning techniques. In particular, I wanted to try out ant colony optimisation (ACO). This uses artificial pheromones to guide “ants” round an environment, which means that with the right fitness function, I could make an ACO trail some ants out of a paper bag. Surely this was the proof the world needed that I can code. I gave a talk at the 2013 ACCU conference, and people there signed a certificate for me.
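The pheromone mechanism is simple enough to sketch in a few lines of Python. This is a hypothetical toy, not the code from the talk or the book: ants pick one of a handful of routes with probability proportional to the pheromone on each, shorter routes earn larger deposits, and evaporation lets the colony forget early mistakes.

```python
import random

# Toy ant colony optimisation over a flat list of routes.
# Route lengths and all parameters are invented for illustration;
# real ACO typically works on a graph, edge by edge.
ROUTE_LENGTHS = [9.0, 4.0, 7.0, 2.0, 8.0]   # route 3 is shortest

def run_colony(iterations=100, n_ants=100, evaporation=0.5, seed=7):
    rng = random.Random(seed)
    pheromone = [1.0] * len(ROUTE_LENGTHS)
    for _ in range(iterations):
        # Each ant picks a route, weighted by pheromone.
        picks = rng.choices(range(len(ROUTE_LENGTHS)),
                            weights=pheromone, k=n_ants)
        # Old pheromone evaporates...
        pheromone = [p * (1 - evaporation) for p in pheromone]
        # ...and each ant deposits more on shorter routes.
        for route in picks:
            pheromone[route] += 1.0 / ROUTE_LENGTHS[route]
    return pheromone

pheromone = run_colony()
best_route = pheromone.index(max(pheromone))
```

After a hundred iterations nearly all the pheromone sits on the shortest route, which is the sense in which the colony “learns”.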
The talk went down well, and I thought up several other ways to code your way out of a paper bag. I used genetic algorithms to find the angle and velocity at which to fire a cannon ball, so that it would start inside the paper bag and fly over the edge. I tried a Monte Carlo simulation, first of Brownian motion, making green blobs diffuse from the centre of a bag spreading out over time, then of stock prices, starting on the left of the bag and tending to go up over time.
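To give a flavour of how that works, here is a minimal sketch under made-up bag dimensions (this is not the book’s actual code): each individual is an (angle, velocity) pair, fitness is how far the ball clears the bag wall, and fitter pairs are bred, with occasional mutation.

```python
import math
import random

# Bag dimensions and GA parameters are invented for illustration.
BAG_EDGE_X = 5.0    # horizontal distance from launch point to the wall
BAG_HEIGHT = 3.0    # height of the wall to clear
G = 9.81

def clearance(angle, velocity):
    """Height of the ball above the wall when it reaches the edge."""
    vx = velocity * math.cos(angle)
    vy = velocity * math.sin(angle)
    if vx <= 0:
        return float("-inf")            # never reaches the wall
    t = BAG_EDGE_X / vx                 # time to reach the wall
    y = vy * t - 0.5 * G * t * t        # height at that moment
    return y - BAG_HEIGHT

def evolve(generations=100, pop_size=20, seed=42):
    rng = random.Random(seed)
    pop = [(rng.uniform(0.1, 1.4), rng.uniform(1, 30))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: clearance(*p), reverse=True)
        parents = pop[:pop_size // 2]   # keep the fitter half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            child = [(a[0] + b[0]) / 2, (a[1] + b[1]) / 2]  # crossover
            if rng.random() < 0.2:                          # mutation
                child[0] += rng.gauss(0, 0.1)
                child[1] += rng.gauss(0, 1.0)
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=lambda p: clearance(*p))

angle, velocity = evolve()
```

With these made-up numbers the population quickly converges on a shot that clears the wall; keeping the fitter half each generation means the best solution found so far is never lost.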
I added to these over time, including some cellular automata, such as Conway’s Game of Life, and particle swarms and abstract bee colonies. I also considered how to get into the bag in the first place, and used hill climbing and simulated annealing (doing something random once in a while) to make a Python turtle or two climb down into a bag. I wondered if I had enough material for a book by this point. Several friends encouraged me to give it a shot. These are all in the book [Genetic Algorithms and Machine Learning for Programmers]. They give straightforward, visual ways of seeing how an algorithm “learns” (or improves, at least) over time. Some of the earlier talks are on YouTube (search for ‘frances buontempo’).
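Simulated annealing is the “something random once in a while” idea made precise. In this hypothetical sketch (the bumpy function and cooling schedule are invented here, not taken from the book’s turtle example), the search always accepts a better neighbour, and accepts a worse one with a probability that shrinks as a “temperature” cools, so it can escape local minima early on but settles down later.

```python
import math
import random

def bumpy(x):
    # A function with several local minima; invented for illustration.
    return x * x + 3 * math.sin(5 * x)

def anneal(steps=5000, seed=1):
    rng = random.Random(seed)
    x = rng.uniform(-5, 5)
    best = x
    temperature = 10.0
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.5)
        delta = bumpy(candidate) - bumpy(x)
        # Always accept improvements; accept worse moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or rng.random() < math.exp(-delta / temperature):
            x = candidate
        if bumpy(x) < bumpy(best):
            best = x                     # track the best seen so far
        temperature *= 0.999             # cooling schedule
    return best

best = anneal()
```

Plain hill climbing is the same loop with the `math.exp` clause deleted: it only ever accepts improvements, so it gets stuck in whichever valley it starts in.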
Q2. What’s different about the genetic algorithms and machine learning of today (2019) compared to what was available, say, 20 years ago? I remember being fascinated by neural networks around 1998 but I imagine many things have moved on since then!
I took a break from work in the 90s to go back to university to do a PhD, full of excitement about AI and neural networks, partially fuelled by a keen imagination and a lot of SciFi. I was mildly disappointed to discover that a lot of AI/machine learning/data mining involved coding up a “for” loop that gradually sought out a target, such as a maximum or minimum, over time. Many algorithms started somewhere picked at random. This niggled a bit, because my background is mathematics. However, it became apparent that you can solve problems this way, or at least explore a space of possible values/outcomes. Back then, you tended to have to hand-code much of this yourself, but that gave me a solid grounding in what happens under the hood. Today, a lot of people pick up a framework, often in Python, and plug in some parameters to see what happens. I worry that a lot of people are building production systems like this, without enough testing or validation, and the systems that get built can be very fragile. Here’s the thing: machine learning code is code - you should still test it, and find a sane pipeline for deploying it, adding new features, and so on.
Another difference is the speed of machines. Back in the 1990s I left a neural network chugging overnight. The same thing would take just minutes now. This makes the algorithms more accessible. It also makes more things possible. I suspect new types of neural networks, including those used by computer vision (labelling cat pictures most of the time), and natural language processing (chat-bots usually), have come about because of the faster hardware with way more RAM.
I don’t think genetic algorithms have changed much since the 1990s. However, there seems to be more interest in them, or at least in the wider “evolutionary programming” area. I gave a talk about how to generate the code for Fizz Buzz with Chris Simons at ACCU’s conference this year. Will AI make programmers unemployed? Well, you can use genetic programming, a variant on genetic algorithms that generates trees (think abstract syntax tree (AST)) instead of a list of numbers or letters. Once you have an AST, you have a program. Oh my, you should see the code. However, it only works if you write the tests first. The AI then generates code to pass your tests. If we do end up being made redundant because of this, the customer had better get really good at specifying tests up front.
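A drastically simplified taste of that idea (hypothetical code; real genetic programming also crosses over and mutates subtrees, where this sketch only generates random trees and keeps the best): grow tiny expression trees over x, score each one against the “tests” (here, matching a made-up target function f(x) = 2x + 1 on a few inputs), and select the fittest.

```python
import random

# Grammar: a tree is "x", a small integer, or (op, left, right).
def random_tree(rng, depth=3):
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(["x", 1, 2, 3])
    op = rng.choice(["+", "-", "*"])
    return (op, random_tree(rng, depth - 1), random_tree(rng, depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a - b if op == "-" else a * b

def error(tree):
    # The "tests" the evolved program must pass: match 2x + 1 on 0..9.
    return sum(abs(evaluate(tree, x) - (2 * x + 1)) for x in range(10))

def search(tries=20000, seed=0):
    rng = random.Random(seed)
    best = random_tree(rng)
    best_err = error(best)
    for _ in range(tries):
        candidate = random_tree(rng)
        err = error(candidate)
        if err < best_err:
            best, best_err = candidate, err
    return best

best = search()
```

Once you have a tree like this, printing it as code is a tree walk away, which is exactly why “once you have an AST, you have a program”.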
Q3. How can we make AI and ML software testable? Where should we focus testing efforts with genetic algorithms?
How do you make any software testable? Before I answer, I just want to mention that you can borrow some AI/ML ideas in order to test your own code well. I see fuzzers, property-based testing and mutation testing as important tools, when appropriate. Genetic algorithms breed new solutions to problems, but mutate these periodically, borrowing from Darwinian evolution. Mutation testing does this second part - it mutates your code. If your tests still pass, it’s either automatically refactored for you, or your tests aren’t good enough. Each of these approaches does something random. This can be a problem for people trying to test AI/ML software: it often does something random, so how do you test it? Don’t bother recording specific numerical output for specific inputs with a known random seed. I mean, you can, but what does this really test? If you expect something to tend to go up in value, check the trend. If you add a random amount at each step, mock that out and add zero. For my stock price simulations, I test a few properties - for example, starting with a value of zero, no matter what the interest rate, your stock remains at zero. I try to find edge cases I can work out with a pen and paper for any AI or mathematical code. Setting parameters to zero or one often helps. Check it does what you think. And check empty data sets, otherwise your hard-core AI engine might crash a production run.
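Those properties can be written as ordinary assertions against a toy simulation. Everything here is invented for illustration (it is not the book’s code); the key design point is that the random noise is injectable, so a test can mock it out with zero.

```python
import random

def simulate(start, rate, steps, noise=random.gauss):
    """Toy stock-price walk: each step grows by `rate` plus noise."""
    prices = [start]
    for _ in range(steps):
        prices.append(prices[-1] * (1 + rate + noise(0, 0.01)))
    return prices

# Property 1: a stock starting at zero stays at zero, whatever the rate.
for rate in (0.0, 0.05, -0.05, 1.0):
    assert all(p == 0 for p in simulate(0, rate, 100))

# Property 2: with the randomness mocked out to zero, a positive
# rate means the price trends up at every step.
no_noise = lambda mu, sigma: 0.0
prices = simulate(100.0, 0.01, 50, noise=no_noise)
assert all(b > a for a, b in zip(prices, prices[1:]))

# Property 3: the zero-steps edge case doesn't crash.
assert simulate(100.0, 0.01, 0) == [100.0]
```

Passing the noise source as a parameter is what makes the trend property testable at all: with real randomness, a single downward step would make a strict “always up” check flaky.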
Keep your functions shorter, so you can swap bits out for mocks, or better implementations over time. Make sure test failure messages are clear. Write your tests first, and get the machine to generate code that passes them. Put your code in version control, run CI and have a think about how to deploy it as well. AI is no excuse for not engaging a bit of common sense (and good practices!).
Q4. You edit Overload magazine. What are the trends you’re seeing in topics that programmers are writing about?
A very good question. I made a graph of trends shortly after I became the editor [at Overload], and at the time people were talking about testing a lot. I should try this again and see where we’re at now.
Recently, the changes happening in C++ have generated quite a few articles about new language features. We used to get a few C# and Java articles, but I’ve not seen a Java one for a while. Please do get in touch if you want to write about Java - people in ACCU are interested. Periodically, someone writes a Python article. Andy Balaam wrote a mini-series on how to write a programming language.
There’s an article in the next (Aug) edition about logging, which is a topic that crops up from time to time. I’d like to see more about the whole software environment, including testing, team work, knowledge sharing, deploying code, and so on. And any new languages or language features.
We get a variety of submissions. We do turn down a few. It is an honour to encourage people. Some are famous authors and speakers already, but some people have never been published before. I strongly believe everyone has something interesting to say. If you need a spot of help with structuring, spelling, or deciding which details to include, we can help. I do a first pass on submissions, and when they are in good shape they go to the review team. The peer review process means academics can count these towards their publication requirements. It also means the final articles tend to be a bit more polished than, say, a blog chucked together in half an hour. I always spot at least one typo after we’ve gone to print though. No matter how much you test, something can always sneak through.