Spring Colloquium on Probability and Finance

University of Padova, Department of Mathematics "Tullio Levi-Civita"

28 May 2019, 10.30 - 13.00, room 2BC/60

The Spring Colloquium on Probability and Finance brings together three leading researchers, who will present recent work at the intersection of stochastic analysis, mathematical finance, insurance, and machine learning. This colloquium is part of the research project Information in Financial Markets: A Probabilistic Perspective, funded by the Europlace Institute of Finance, an initiative of the Louis Bachelier Institute.

Organisers: Giorgia Callegaro and Claudio Fontana

Contact e-mail: fontana(at)math.unipd.it


SPEAKERS:

  • Monique JEANBLANC (University of Evry, France)

  • Ying JIAO (University of Lyon I, France)

  • Martin LARSSON (ETH Zurich, Switzerland)


SCIENTIFIC PROGRAM:

10.00 - 10.30: welcome coffee

10.30 - 11.15: M. Jeanblanc - Characteristics of Default Times

Abstract: In this talk we define several processes associated with a random time, such as the intensity, the Azéma supermartingale, and the dual projections of its indicator process, and we show how to construct random times with given characteristics.
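
As background for readers less familiar with these objects, here is a sketch of the standard definitions from the theory of random times (not necessarily the exact framework of the talk). Given a random time \tau on a filtered probability space (\Omega, \mathcal{A}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P}), one sets

\[
Z_t = \mathbb{P}(\tau > t \mid \mathcal{F}_t), \qquad H_t = \mathbf{1}_{\{\tau \le t\}},
\]

where Z is the Azéma supermartingale and H the indicator process of \tau. Writing the Doob-Meyer-type decomposition Z = m - A^p, with m a martingale and A^p the dual predictable projection of H, the intensity is the process \lambda satisfying, when it exists,

\[
A^p_t = \int_0^t \lambda_s \, Z_{s-} \, ds .
\]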

11.15 - 12.00: Y. Jiao - Modelling Dependent Lives Mortality in Bivariate Life Insurance

Abstract: We study joint-life mortality modelling for a couple in life insurance. Using a simple forward mortality rate model, we show the role of the information set in the dependence structure between the two spouses. We also discuss the valuation of joint-life insurance contracts and the broken-heart syndrome.
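
For orientation, the basic single-life relation behind forward mortality rate models (a standard formula, given here as background rather than as the talk's specific model): with \tau the death time, \mu(t,s) the forward mortality rate seen at time t for date s, and \mathcal{G}_t the information available at time t,

\[
\mathbb{P}(\tau > T \mid \mathcal{G}_t) = \mathbf{1}_{\{\tau > t\}} \exp\Big( - \int_t^T \mu(t,s) \, ds \Big), \qquad T \ge t .
\]

In the joint-life setting one models the pair (\tau_1, \tau_2) of the spouses' death times, and, as the abstract indicates, the dependence between \tau_1 and \tau_2 is sensitive to the choice of the information set \mathcal{G}_t.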

12.00 - 12.45: M. Larsson - The Expressiveness of Random Dynamical Systems

Abstract: Deep neural networks perform exceedingly well on a variety of learning tasks, in particular in finance, where they are quickly gaining importance. Training a deep neural network amounts to optimizing a nonlinear objective over a very large parameter space. This would seem a hopeless task if an optimal or near-optimal solution were required. The fact that training can nonetheless succeed suggests that the result is largely insensitive to the details of the optimization procedure, a perspective supported by empirical evidence. In this work we take a step toward a theoretical understanding of this phenomenon. In a model of deep neural networks as discretizations of controlled dynamical systems, we rigorously prove that any learning task can be accomplished even if a majority of the parameters are chosen at random.
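
To make the dynamical-systems viewpoint concrete, here is the standard identification of a residual network with the Euler discretization of a controlled ODE (a background sketch; the talk's precise model may differ). With step size h > 0, activation \sigma, and layer parameters (W_k, b_k),

\[
x_{k+1} = x_k + h \, \sigma(W_k x_k + b_k) \quad \longleftrightarrow \quad \dot{x}(t) = \sigma\big( W(t) \, x(t) + b(t) \big),
\]

so the weights act as controls steering the state x. In this language, the result described above says that the system can still be steered to accomplish any learning task even when a majority of the controls (W_k, b_k) are drawn at random.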