Approximating Stochastic Sliced Wasserstein Kernels in Optimal Transport
Speaker: Abdelkader Baggag
Many learning applications involve data in the form of a point cloud, a histogram, or a probability distribution rather than a single point. This type of data can be encapsulated as a probability measure over some space. Learning from probability measures, however, requires specially tailored representations and algorithms. A promising means of processing, comparing, and learning from distributional data is optimal transport (OT). OT has become a popular tool in machine learning and statistics, enabling a broad array of applications.
Despite recent algorithmic developments, optimal transport distances remain expensive to compute. One exception is the one-dimensional case, for which transport distances are easily computed. The sliced Wasserstein distance has therefore been proposed as an approximation to the full transport problem: it averages one-dimensional transport distances over random projections of the data. Beyond its value as an approximate transport distance, it yields a positive definite kernel and can thus be used in classical kernel-based learning methods, from classification with support vector machines (SVMs) to principal component analysis; this property does not hold for the true Wasserstein distance. In this paper, we demonstrate unbiased estimation of the sliced Wasserstein kernel as well as practical improvements to point cloud classification pipelines that use SVMs rather than deep learning machinery.
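The projection-and-sort idea behind the sliced distance can be sketched in a few lines. The following is a minimal Monte Carlo estimator for point clouds of equal size, not the authors' exact estimator; the number of projections and the order `p` are free parameters:

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein-p distance
    between two point clouds X and Y of equal size (n, d)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both clouds onto each direction.
    X_proj = X @ theta.T  # shape (n, n_projections)
    Y_proj = Y @ theta.T
    # In one dimension, optimal transport simply matches sorted samples.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    # Average the one-dimensional Wasserstein-p costs over projections.
    dists = np.mean(np.abs(X_sorted - Y_sorted) ** p, axis=0)
    return float(np.mean(dists) ** (1.0 / p))
```

A kernel can then be obtained in the usual way, e.g. `exp(-sliced_wasserstein(X, Y)**2 / (2 * sigma**2))`, and plugged into an SVM or kernel PCA.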
Explainable Checking for Property and Statistical Claims
Speaker: Paolo Papotti
One challenge in fact checking is making the decision process transparent. We present two checking methods that use reference data to assess claims and explain their decisions.
For property claims, we use the rich semantics in knowledge graphs (KGs) to verify claims and produce interpretable explanations. As information in a KG is inevitably incomplete, we rely on logical rule discovery and on Web text mining to gather the evidence to assess claims. Uncertain rules and facts are turned into logical programs and the checking task is modeled as an inference problem in a probabilistic extension of answer set programs. For statistical claims, we reduce the checking effort by supporting humans in translating text claims into SQL queries on relational databases. We exploit text classifiers to propose validation queries to the users and rely on tentative execution of query candidates to narrow down the set of alternatives. The verification process is controlled by a cost-based optimizer that considers expected verification overheads as well as the expected claim utility as training samples for the classifiers.
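For the statistical case, the shape of a validation query can be illustrated on a toy example. The table, claim, and SQL translation below are hypothetical and only sketch the claim-to-query idea; they are not the authors' translation pipeline:

```python
import sqlite3

# Hypothetical reference table of GDP figures (illustrative values).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gdp (country TEXT, year INTEGER, value REAL)")
conn.executemany("INSERT INTO gdp VALUES (?, ?, ?)", [
    ("France", 2018, 2.78),
    ("Italy", 2018, 2.09),
])

# Claim: "France's 2018 GDP was higher than Italy's."
# A candidate validation query compares the two referenced cells.
query = """
SELECT (SELECT value FROM gdp WHERE country = 'France' AND year = 2018)
     > (SELECT value FROM gdp WHERE country = 'Italy' AND year = 2018)
"""
verdict = conn.execute(query).fetchone()[0]
print("claim holds" if verdict else "claim refuted")
```

In the described system, several such candidate queries would be proposed by text classifiers and tentatively executed, with a cost-based optimizer deciding which to show to the user.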
Experiments show that our methods enable the efficient and effective labeling of claims with interpretable explanations, both in simulations and in real-world user studies.
3D CNNs for Computational Fluid Dynamics Simulation and Reverse Architectural Design Workflow
Speaker: Josef Musil
Architectural design depends on environmental constraints from its earliest stages, when buildings and cities take shape and informed decisions on sustainable development matter most. However, proposals change rapidly, and it is challenging to provide relevant simulations at the same pace. In particular, Computational Fluid Dynamics (CFD) requires complex geometry preparation and computationally demanding solvers; this is time-consuming and at odds with the speed of design iteration. To strengthen the impact of CFD on design, the work presented here focuses on data-driven flow field prediction, approximating the simulation with convolutional neural networks (CNNs). The workflow is then reversed, offering designers a tool that generates building volumes from a target wind flow.
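The basic building block of such a surrogate is a 3D convolution over a voxelized geometry. The sketch below shows one naive convolution layer applied to a hypothetical occupancy grid; a real CNN surrogate stacks many such learned filters with nonlinearities, and the grid size, kernel, and building shape here are made up for illustration:

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive 'valid' 3D convolution: slides a (k, k, k) kernel over a
    (D, H, W) voxel grid and returns the resulting feature map."""
    k = kernel.shape[0]
    D, H, W = volume.shape
    out = np.zeros((D - k + 1, H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for l in range(out.shape[2]):
                out[i, j, l] = np.sum(volume[i:i+k, j:j+k, l:l+k] * kernel)
    return out

# Hypothetical input: a 16^3 occupancy grid marking building voxels.
occupancy = np.zeros((16, 16, 16))
occupancy[4:10, 4:10, 0:8] = 1.0           # a box-shaped building volume
kernel = np.full((3, 3, 3), 1.0 / 27.0)    # one filter; here an averaging kernel
feature_map = conv3d(occupancy, kernel)
print(feature_map.shape)  # (14, 14, 14)
```

A trained network would map such an occupancy grid to a predicted flow field on the same grid; reversing the workflow means optimizing the occupancy input so the predicted flow matches a designer's target.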