Over 25 industry professionals attended day two of the open courses at Simula Research Laboratory, dedicated to topics in software maintenance and evolution.
Magne Jørgensen, professor of software engineering, spoke in the first half of the day. His talk was about evidence-based software engineering, and its principal learning goal was to encourage software professionals to be more evidence-based in their choices and decisions. Magne presented a step-by-step approach to making better choices by collecting evidence from sources such as Google Scholar, practice-based evidence, and one's own data collection. He also warned of the pitfalls in this approach and in the selection of evidence, illustrating how easily professionals can be manipulated by popular but inaccurate reports and claims that lead to very high cost overruns in software projects. The talk also gave psychological insight into the reasons behind our choices. His presentation stirred the inner beliefs of many participants, leading to a very lively discussion in the classroom. The lecture was refreshing to all of us, and we look forward to better evidence-based human judgement in our daily lives, including when we have some back pain.
In the second half of the day, Leon Moonen picked up where Magne Jørgensen left off. His talk was about Software Analytics. Leon, an internationally known expert in the domain of software evolution, discussed how Software Analytics can support software engineering decisions by collecting real-time evidence about ongoing system development, quality, and user experience, extracted from numerous sources such as development artifacts, bug reports, user-experience surveys, and logs, to name a few. Leon also organized an interesting “card sorting & concept mapping” activity in which he asked the participants to come up with three software engineering questions they would like data scientists to answer using software analytics. In addition, the participants were asked to write down two questions they considered inappropriate for data scientists to address. Next, the participants gathered around a large table with all the questions and were asked to discuss them, cluster them into related categories, and come up with names for the groups.
Numerous questions about improving software processes and testing came up in the final clusters. We also saw questions deemed inappropriate, such as “how many bugs does this developer introduce” and “how productive are individual team members”, along with other privacy-related aspects that people did not want data scientists to analyze. The day ended with a long discussion on tools for software analytics, demonstrating the participants' avid interest in making measurements and unlocking real-time answers from the big data produced during the software engineering life cycle at their respective companies.
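To give a flavor of what such tooling can look like, here is a minimal sketch of one analytics question in the spirit of those raised during the day: which files attract the most bug-fix commits? The repository path and the “fix”-keyword heuristic for spotting bug fixes are illustrative assumptions, not part of the course material.

```python
# A minimal software-analytics sketch: mine a git history to find the
# files most often touched by bug-fix commits. The "fix" keyword
# heuristic and the repository path are illustrative assumptions.
import subprocess
from collections import Counter

def bugfix_hotspots(repo_path: str, keyword: str = "fix", top: int = 10):
    """Count files changed by commits whose message mentions `keyword`."""
    # `--pretty=format:` suppresses commit headers, so the output is
    # just the file names touched by each matching commit.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "-i", f"--grep={keyword}",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = Counter(line.strip() for line in log.splitlines() if line.strip())
    return counts.most_common(top)

if __name__ == "__main__":
    for path, n in bugfix_hotspots("."):
        print(f"{n:4d}  {path}")
```

Even such a simple count can start a conversation about where maintenance effort concentrates, which is exactly the kind of evidence-based discussion both talks encouraged.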