Publication Library

DNMDR: Dynamic Networks and Multi-view Drug Representations for Safe Medication Recommendation

Description: Medication Recommendation (MR) is a promising research topic that supports diverse applications in the healthcare and clinical domains. However, existing methods mainly rely on sequential modeling and static graphs for representation learning, ignoring the dynamic correlations among the diverse medical events of a patient's temporal visits and therefore exploring global node structure insufficiently. Mitigating drug-drug interactions (DDIs) is another issue that determines the utility of MR systems. To address these challenges, this paper proposes a novel MR method that integrates dynamic networks and multi-view drug representations (DNMDR). Specifically, weighted snapshot sequences for dynamic heterogeneous networks are constructed from the discrete visits in temporal EHRs, and all the dynamic networks are trained jointly to capture both the structural correlations among diverse medical events and the temporal dependency in historical health conditions, yielding comprehensive patient representations with both semantic features and structural relationships. Moreover, by combining drug co-occurrences and adverse DDIs in an internal view of drug molecular structure and an interactive view of drug pairs, safe drug representations are obtained for high-quality medication combination recommendation. Finally, extensive experiments on real-world datasets demonstrate that the proposed DNMDR method outperforms state-of-the-art baseline models by a large margin on metrics such as PRAUC, Jaccard, and DDI rate.
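As a rough illustration of the snapshot construction described above, the sketch below builds one weighted heterogeneous co-occurrence graph per visit from a toy EHR sequence. The visit format, the medical codes, and the co-occurrence weighting are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch (not the DNMDR authors' code): one weighted snapshot
# graph per visit, built from within-visit co-occurrence of medical codes.
from collections import defaultdict
from itertools import combinations

# Hypothetical EHR: each visit lists diagnosis (d), procedure (p), and drug (m) codes.
visits = [
    {"diag": ["d_401", "d_250"], "proc": ["p_39"], "drug": ["m_metformin"]},
    {"diag": ["d_250", "d_428"], "proc": ["p_88"], "drug": ["m_metformin", "m_furosemide"]},
]

def build_snapshots(visits):
    """Return one weighted heterogeneous co-occurrence graph per visit."""
    snapshots = []
    for visit in visits:
        nodes = [code for codes in visit.values() for code in codes]
        edges = defaultdict(float)
        for u, v in combinations(sorted(nodes), 2):
            edges[(u, v)] += 1.0  # assumed weight: within-visit co-occurrence count
        snapshots.append({"nodes": set(nodes), "edges": dict(edges)})
    return snapshots

for t, graph in enumerate(build_snapshots(visits)):
    print(f"visit {t}: {len(graph['nodes'])} nodes, {len(graph['edges'])} weighted edges")
```

In the paper's setting, such per-visit snapshots would then be encoded jointly so that later visits can attend to structure learned from earlier ones.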

Created At: 16 January 2025

Updated At: 16 January 2025

deepTerra -- AI Land Classification Made Easy

Description: deepTerra is a comprehensive platform designed to facilitate the classification of land surface features using machine learning and satellite imagery. The platform includes modules for data collection, image augmentation, training, testing, and prediction, streamlining the entire workflow for image classification tasks. This paper presents a detailed overview of the capabilities of deepTerra, shows how it has been applied to various research areas, and discusses the future directions it might take.
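The abstract does not include code, but the workflow it describes (collect, augment, train, test, predict) can be sketched on toy data. The snippet below uses scikit-learn and synthetic patches purely as stand-ins; none of the names come from deepTerra itself.

```python
# Illustrative sketch only, not deepTerra's API: a toy augment -> train ->
# test -> predict pipeline for patch-based land-cover classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
patches = rng.random((200, 8, 8, 3))       # toy "satellite" image patches
labels = rng.integers(0, 3, size=200)      # toy land-cover classes

# Augmentation: add a horizontally flipped copy of every patch.
aug = np.concatenate([patches, patches[:, :, ::-1, :]])
aug_labels = np.concatenate([labels, labels])

X = aug.reshape(len(aug), -1)              # flatten pixels into feature vectors
X_tr, X_te, y_tr, y_te = train_test_split(X, aug_labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("prediction for one new patch:", clf.predict(X[:1]))
```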

Created At: 16 January 2025

Updated At: 16 January 2025

SAR Strikes Back: A New Hope for RSVQA

Description: Remote sensing visual question answering (RSVQA) is a task that automatically extracts information from satellite images and answers questions about them in textual form, helping with the interpretation of the images. While different methods have been proposed to extract information from optical images with different spectral bands and resolutions, no method has been proposed to answer questions from Synthetic Aperture Radar (SAR) images. SAR images capture electromagnetic information from the scene and are less affected by atmospheric conditions such as clouds. In this work, our objective is to introduce SAR into the RSVQA task and find the best way to use this modality. We carry out a study of different pipelines for RSVQA that take into account information from both SAR and optical data, and we present a dataset that allows SAR images to be introduced into the RSVQA framework. We propose two models to include the SAR modality. The first is an end-to-end method in which we add an additional encoder for the SAR modality. The second builds on a two-stage framework: relevant information is first extracted from SAR and, optionally, optical data, and then translated into natural language to be used in a second step that relies only on a language model to provide the answer. We find that the second pipeline obtains good results with SAR images alone. We then evaluate various fusion methods for using SAR and optical images together, finding that fusion at the decision level achieves the best results on the proposed dataset. We show that SAR data offers additional information when fused with the optical modality, particularly for questions related to specific land cover classes such as water areas.
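As a hedged sketch of the decision-level fusion the abstract reports as best, the snippet below averages per-answer probabilities from a SAR branch and an optical branch and picks the most likely answer. The answer vocabulary, logits, and weighting are illustrative assumptions, not the paper's implementation.

```python
# Not the paper's code: toy decision-level fusion of SAR and optical
# answer scores for a single RSVQA question.
import numpy as np

ANSWERS = ["yes", "no", "water", "urban"]  # illustrative answer vocabulary

def fuse_decisions(sar_logits, opt_logits, w_sar=0.5):
    """Average per-answer probabilities from the two branches and return the argmax."""
    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()
    probs = w_sar * softmax(sar_logits) + (1 - w_sar) * softmax(opt_logits)
    return ANSWERS[int(np.argmax(probs))], probs

# Toy logits for one question, e.g. "Is there a water area?"
sar_logits = np.array([2.1, 0.3, 1.8, -0.5])
opt_logits = np.array([1.2, 0.9, 0.4, 0.1])
answer, probs = fuse_decisions(sar_logits, opt_logits)
print(answer, probs.round(3))
```

Fusing at the decision level, as here, keeps the two modality branches independent, which is one reason such schemes are easy to combine with single-modality pipelines.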

Created At: 16 January 2025

Updated At: 16 January 2025

Physical AI Agents: Integrating Cognitive Intelligence with Real-World Action

Description: Vertical AI Agents are revolutionizing industries by delivering domain-specific intelligence and tailored solutions. However, many sectors, such as manufacturing, healthcare, and logistics, demand AI systems capable of extending their intelligence into the physical world, interacting directly with objects, environments, and dynamic conditions. This need has led to the emergence of Physical AI Agents--systems that integrate cognitive reasoning, powered by specialized LLMs, with precise physical actions to perform real-world tasks. This work introduces Physical AI Agents as an evolution of Vertical AI Agents that shares their core principles but is tailored for physical interaction. We propose a modular architecture with three core blocks--perception, cognition, and actuation--offering a scalable framework for diverse industries. Additionally, we present the Physical Retrieval Augmented Generation (Ph-RAG) design pattern, which connects physical intelligence to industry-specific LLMs for real-time decision-making and reporting informed by physical context. Through case studies, we demonstrate how Physical AI Agents and the Ph-RAG framework are transforming industries like autonomous vehicles, warehouse robotics, healthcare, and manufacturing, offering businesses a pathway to integrate embodied AI for operational efficiency and innovation.
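A minimal sketch of the three-block architecture named above (perception, cognition, actuation) follows. The class and method names are hypothetical, and the rule-based decision stands in for the LLM-driven cognition and Ph-RAG retrieval described in the abstract.

```python
# Illustrative sketch only: a perception -> cognition -> actuation loop.
# Names such as PhysicalAIAgent and step() are assumptions, not an API from the paper.
from dataclasses import dataclass

@dataclass
class Observation:
    obstacle_distance_m: float

class Perception:
    def sense(self) -> Observation:
        return Observation(obstacle_distance_m=0.8)  # stand-in for real sensor input

class Cognition:
    def decide(self, obs: Observation) -> str:
        # In a Ph-RAG setup this step would query a domain LLM with retrieved
        # physical context; here a simple rule stands in for that reasoning.
        return "stop" if obs.obstacle_distance_m < 1.0 else "proceed"

class Actuation:
    def act(self, command: str) -> None:
        print(f"actuator command: {command}")

class PhysicalAIAgent:
    def __init__(self):
        self.perception, self.cognition, self.actuation = Perception(), Cognition(), Actuation()

    def step(self):
        self.actuation.act(self.cognition.decide(self.perception.sense()))

PhysicalAIAgent().step()  # prints "actuator command: stop"
```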

Created At: 16 January 2025

Updated At: 16 January 2025

SITUATIONAL AWARENESS: The Decade Ahead

Description: SITUATIONAL AWARENESS: The Decade Ahead

Created At: 01 January 2025

Updated At: 01 January 2025
