Publication Library

Damped Online Newton Step for Portfolio Selection

Description: We revisit the classic online portfolio selection problem, where at each round a learner selects a distribution over a set of portfolios to allocate its wealth. It is known that a logarithmic regret with respect to Cover's loss is achievable for this problem, for example via the Universal Portfolio Selection algorithm. However, all existing algorithms that achieve logarithmic regret for this problem have per-round time and space complexities that scale polynomially with the total number of rounds, making them impractical. In this paper, we build on the recent work of Luo et al. (2018) and present the first practical online portfolio selection algorithm with logarithmic regret whose per-round time and space complexities depend only logarithmically on the horizon. Behind our approach are two key technical novelties of independent interest. We first show that Damped Online Newton steps can approximate mirror descent iterates well, even when dealing with time-varying regularizers. Second, we present a new meta-algorithm that achieves an adaptive logarithmic regret (i.e., a logarithmic regret on any sub-interval) for mixable losses.
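The abstract above builds on the Online Newton Step family of updates. As a rough illustration of the basic (undamped) idea, the following sketch runs a second-order update on the log-wealth objective; it is not the paper's algorithm, and for simplicity it uses a Euclidean simplex projection rather than the norm-induced projection a faithful ONS implementation would require. The step size `eta` and regularizer `eps` are illustrative choices.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def online_newton_portfolio(price_relatives, eta=0.1, eps=1.0):
    """Sketch of an Online Newton Step loop for log-wealth portfolio selection.

    price_relatives: T x d array; entry (t, i) is the close/open price
    ratio of asset i at round t. Returns final wealth and final weights.
    """
    T, d = price_relatives.shape
    w = np.full(d, 1.0 / d)          # start from the uniform portfolio
    A = eps * np.eye(d)              # accumulated curvature (regularized)
    wealth = 1.0
    for t in range(T):
        r = price_relatives[t]
        wealth *= w @ r              # realize this round's return
        g = -r / (w @ r)             # gradient of the log loss -log(w . r)
        A += np.outer(g, g)          # rank-one second-order update
        w = simplex_projection(w - eta * np.linalg.solve(A, g))
    return wealth, w
```

Note that each round costs O(d^2) to O(d^3) in the number of assets, independent of the horizon T; the per-round costs the abstract refers to concern the dependence on T for the harder universal-portfolio setting.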

Created At: 14 December 2024

Updated At: 14 December 2024

The Security of Deep Learning Defences for Medical Imaging

Description: Deep learning has shown great promise in the domain of medical image analysis, and medical professionals and healthcare providers have been adopting the technology to speed up and enhance their work. These systems use deep neural networks (DNNs), which are vulnerable to adversarial samples: images with imperceptible changes that can alter the model's prediction. Researchers have proposed defences which either make a DNN more robust or detect adversarial samples before they do harm. However, none of these works consider an informed attacker who can adapt to the defence mechanism. We show that an informed attacker can evade five of the current state-of-the-art defences while successfully fooling the victim's deep learning model, rendering these defences useless. We then suggest better alternatives for securing healthcare DNNs from such attacks: (1) harden the system's security, and (2) use digital signatures.
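For readers unfamiliar with the adversarial samples this abstract refers to, a minimal illustration is the Fast Gradient Sign Method (FGSM), which nudges each input feature in the direction that increases the model's loss. This is not the paper's attack; the sketch below uses a single-layer logistic model as a stand-in for a DNN, and the budget `epsilon` is an illustrative choice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, weights, bias, epsilon=0.05):
    """One FGSM step against a logistic model p = sigmoid(w.x + b).

    The gradient of the cross-entropy loss with respect to the input x
    is (p - y) * weights; moving x by epsilon in its sign direction
    increases the loss while keeping the change per pixel tiny.
    """
    p = sigmoid(weights @ x + bias)
    grad_x = (p - y) * weights
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)
```

The "informed attacker" threat model in the paper goes further: the attacker also knows the defence and optimizes the perturbation to evade it, which is why defences evaluated only against oblivious attacks like this one can fail.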

Created At: 14 December 2024

Updated At: 14 December 2024

Data Representativity for Machine Learning and AI Systems

Description: Data Representativity for Machine Learning and AI Systems

Created At: 14 December 2024

Updated At: 14 December 2024

Reinforcement Learning for Precision Oncology

Description: Reinforcement Learning for Precision Oncology

Created At: 14 December 2024

Updated At: 14 December 2024

Seven Tools of Causal Inference with Reflections on Machine Learning

Description: Seven Tools of Causal Inference with Reflections on Machine Learning

Created At: 14 December 2024

Updated At: 14 December 2024
