Many large organisations maintain a large pool of trained human resources. When a new task arrives, the management constructs a team by selecting appropriate team members with different skills and arranges an effective operational structure for the team. In machine learning (ML), model developers typically train many models for each individual task, select the best model to perform the task, and discard the unselected models. Considering that keeping a trained ML model costs much less than employing a person, this represents a huge waste of model resources. The main reasons behind this wasteful practice include (i) the lack of effective means for apprehending the "skill profiles" of a large number of ML models; (ii) the lack of effective means for constructing a "team" such that the combined skillset of the team is suitable for the task even though no component model has all the skills required; and (iii) the lack of effective means for enabling human decision makers to utilise imperfect ML models as assistants or advisers. Because of these reasons, there is little incentive to maintain a large pool of trained ML models that may not individually be the best for a specific task, and the emphasis has been placed on training a "star" model that is as optimal as possible for each arriving task.
The technology of visualization and visual analytics (VIS) can address the aforementioned three "lacks". In many data-intensive applications, VIS enables decision makers to observe a large amount of data quickly (e.g., in stock markets), analyse complex relationships among different data entities (e.g., in social network analysis), and make complex judgements based on multiple and sometimes conflicting machine predictions (e.g., by different epidemiological models). The latest theoretical advance offers an explanation of what visualization offers users that statistics and algorithms cannot. Humans have limited cognitive bandwidth for receiving and reasoning about information. To reduce the amount of information received by humans, statistics and algorithms typically transform a large amount of data into a few variables (e.g., mean and standard deviation) at a higher precision, while visualization presents many more variables at a lower precision (e.g., a line plot of 500 data points in a time series). Because humans can perceive many variables visually at a very low cognitive cost, more cognitive bandwidth can be directed to data-informed reasoning. This explains why financial experts rarely base their decisions on only one or two financial indicators, but also need to observe time series data. Visual analytics is a branch of VIS focusing on the combined use of statistics, algorithms, visualization, and interaction in human decision workflows.
In this project, we will develop a new technology that enables human decision makers to benefit from VIS capabilities in their workflows. We address the first aforementioned "lack" by designing and developing a novel VIS-enabled infrastructure where hundreds or thousands of ML models can be stored with their provenance, tested and profiled automatically and routinely, and managed as trained model resources by ML model developers with the aid of VIS capabilities. We address the second "lack" by providing ML model developers with a VIS-enabled tool for constructing ensemble models (i.e., teams of ML models) by selecting appropriate component models (i.e., team members) from a pool of model resources and determining an appropriate ensemble strategy (i.e., team structure). Last but not least, we address the third "lack" by providing ML model users (i.e., decision makers who receive low-level predictions or recommendations from ML models) with VIS capabilities that allow them to quickly observe anomalies and conflicts in the low-level predictions made by different models and, when it is helpful, to scrutinise the profile and provenance of these models.
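To make the team analogy concrete, the sketch below shows one simple ensemble strategy, majority voting over a pool of component models. The model names, predictions, and the choice of majority voting are illustrative assumptions only; the proposed tool would let developers select component models and strategies interactively rather than hard-coding them.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine component-model predictions by majority vote.

    `predictions` maps a (hypothetical) component-model name to its
    predicted label. Ties are broken by the order in which the models
    appear, since Counter preserves insertion order for equal counts.
    """
    counts = Counter(predictions.values())
    return counts.most_common(1)[0][0]

# A hypothetical pool of three component models with different "skills";
# no single model needs to be right on every input for the team to be.
preds = {"model_a": "cat", "model_b": "dog", "model_c": "cat"}
print(majority_vote(preds))  # -> cat
```

The point of the example is that the ensemble's combined output can be correct even when an individual component model (here, "model_b") is not.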