Alex Summers wins the 2015 AITO Dahl-Nygaard Prize
Postdoctoral researcher Alex Summers of Prof. Peter Müller's Chair of Programming Methodology has received the junior AITO Dahl-Nygaard Prize for his work on the verification of object-oriented programs and type systems. In this interview, he talks about computer science and his passion for it.
Congratulations to postdoctoral researcher Alex Summers of Prof. Peter Müller's Chair of Programming Methodology for receiving the junior AITO Dahl-Nygaard Prize for his work on the verification of object-oriented programs and type systems. The Dahl-Nygaard award is one of the most prestigious of its kind and is awarded to researchers who have demonstrated significant technical contributions in the area of object-orientation. The prize will be presented at the European Conference on Object-Oriented Programming (ECOOP) in Prague, July 5-10.
A piano player, Imperial College Big Band member, passionate Red Sox Hockey Club Zürich athlete and professional tea drinker (strong but not stewed) – this doesn't sound like the typical description of a computer scientist. How have your extracurricular activities encouraged you to become an award-winning researcher?
It looks like a long list, when you put it like that (although I have to say, I didn't quite do all of those at the same time)! I've always had a lot of interests, and prefer group activities – the time I spent playing jazz at university, and playing hockey both there and here, has been very important to me. Feeling part of a team working together towards something is very encouraging, and that's partly why I enjoy the chance to work with many amazing researchers and students now. Everyone has their own projects, but they all move along a lot faster because of the chance to discuss and collaborate with other enthusiastic people. I also find playing hockey and music a good way to switch off from the work for a time, and exercise and team sports are definitely good for my well-being. One of the exciting challenges about working in research is that it's so open-ended, but that can also make it hard to find time for going to the gym etc. – having a team who expects you to turn up, and people to socialize with outside of research, is a great way to keep things balanced. It can be more productive to have proper breaks than to stubbornly bash away at a particular problem (however many cups of tea you drink).
American business journalist Nicholas Carr states in his book "Does IT Matter?" that "information technology has increasingly become […] a simple factor of production – a commodity input that is necessary for competitiveness …" Multi-billion-dollar global players rely on software to reduce security risks and vulnerabilities, increase production and ensure cost-effectiveness. Alex, how can business and industry profit from your research in software verification?
There's an increasing concern in many industries that software is getting huge and unmanageable. When you start out with a small group of developers and a relatively small codebase, it's still time-consuming to chase down the problems that show up, but manageable. As the software scales up and lives through many versions, it becomes harder to chase down specific problems, and almost impossible to contribute new code with absolute confidence that you won't discover problems later. Not just in safety-critical settings, but also in industries such as finance, where software bugs can cost essentially unbounded amounts, this kind of confidence is hard to obtain without new technology. Researchers in software verification and program analysis tackle these difficulties by building tools that can give programmers feedback about potential problems even without running the code. The reduction of these practical problems to mathematical ones is one of the fascinating topics I'm working on – the possibility of getting fast answers to what become incredibly complex questions comes from a combination of advances in processing power, automated reasoning tools, and the work that I and many other researchers are doing to exploit this technology to enable reliable software. Nice examples of this working in practice are the Slam and Corral projects at Microsoft Research, which produced tools that are now routinely used to check new Windows device drivers. These tools, ultimately based on formal methods approaches to program validation, can be credited with the relative scarcity of the dreaded "blue screen of death" on Windows machines these days!
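To make the idea of checking code without running it a little more concrete, here is a minimal sketch of the kind of annotated method such tools reason about. The class, the contract comments (written in the style of specification languages such as JML) and the scenario are purely illustrative assumptions, not taken from any of the tools or projects mentioned above:

```java
// Purely illustrative sketch (hypothetical class, JML-style annotations):
// a contract that a static verifier could try to prove for *all* inputs,
// without ever executing the code.
public class Account {
    private int balance;  // informally: the balance should never become negative

    //@ requires amount > 0 && amount <= balance;
    //@ ensures  balance == \old(balance) - amount;
    public void withdraw(int amount) {
        balance = balance - amount;
    }
}
```

A verifier working on code like this would try to show that, for every possible caller satisfying the precondition, the postcondition holds and the balance can never go negative, giving feedback about potential bugs before the program is ever run.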
Facebook CEO Mark Zuckerberg was once quoted as saying: "Move fast and break things. Unless you are breaking stuff, you are not moving fast enough." Recently, the company adjusted its internal motto to "Move fast with stable infrastructure". Stability and resilience in software development are key elements of high-quality, reliable software engineering. In a fast-paced digitized world, how easy is it to keep up with change without compromising on quality?
The scale that companies such as Facebook, Google and Microsoft are working at means that they can't afford to be too keen on breaking stuff. Almost all industries rely increasingly on software for their business to operate efficiently, and these technology companies are built around software alone. On the other hand, there is a real trade-off between building things quickly and providing new technology and features, versus producing software that functions correctly and doesn't frustrate its users. A lot depends on the particular risks involved; when Facebook was a new venture, the risks if a few buttons didn't work (there were barely any buttons) or links were broken were relatively minor. Now they offer a platform serving information and games, and face huge media scrutiny over their handling of user data – the more the operation scales up, the harder it is to maintain good-quality software, but the more serious failures become for the business and its customers. Large technology companies are hiring researchers to work specifically on the problems of software development and correctness; many are developing their own languages, programming disciplines and analysis tools specifically to tackle these challenges. A group of prominent verification researchers in the UK recently had their company (Monoidics) bought by Facebook, specifically so that they could apply their research work to analysis techniques for mobile apps. I believe this is a general trend – companies are recognising that they need more sophisticated approaches to software correctness in the current market. On the other hand, research needs to try to keep up and provide technologies that match current programmer demands, and that can be applied incrementally – one of the biggest challenges with many approaches to software verification is that they are hard to apply to only fragments of a program or product, which can be necessary for practical or political reasons in a profit-driven company.
Autonomous machines, such as HAL in Stanley Kubrick's movie "2001: A Space Odyssey" (1968) or the artificially intelligent humanoids in Alex Garland's recent film "Ex Machina" (2015), are no longer merely postmodernist dreams of science fiction. Rather, autonomous machines are real evidence of the growing interdependency between machines and human intelligence. As an expert in software verification, do you think that in the future, machines will ultimately develop software, or even fix their own software bugs?
I do find the extraordinary range of complex tasks that can be automated by computers impressive. The idea of self-driving cars probably seemed crazy, or at least impractical, not that long ago, and now they are a reality. On the other hand, I don't think we yet have evidence that truly independent "thinking machines" will be around any time soon. The main difference I see is that computers are great at searching a problem space in ways that have been defined by humans, but we don't yet know how to teach them to find these approaches themselves. Although self-driving cars will be on the roads soon, the idea of a computer or machine that could learn to drive by imitation and instruction is very far from reality. I think this reflects a fundamental difference – humans are good at learning by building abstract concepts, generalising those and spotting new connections. Of course, we can make mistakes with all of those processes, but computers are not yet able to simulate these rather abstract processes in any general sense. Instead, computers explore problem spaces in a different way, by searching vast spaces of possible solutions and identifying patterns (via human-taught algorithms) that lead towards candidate solutions without this kind of conceptual understanding. Rather than computers acting independently, I think we will see more and more collaboration between humans and machines on complex tasks – this is already happening in areas such as expert systems, where doctors can search for possible diagnoses based on their observations, and computers can rapidly explore a vast background of medical knowledge. In a similar way, computers will support software development by suggesting greater and greater parts of the problem solution, and by identifying potential mistakes and oversights early. This technology will let computers act more and more autonomously, but it will be guided and responded to by people, and invented by researchers like us!