Publication Library

Data Classification Concepts and Considerations for Improving Data Collection

Description: Data classification is the process an organization uses to characterize its data assets using persistent labels so those assets can be managed properly. Data classification is vital for protecting an organization’s data at scale because it enables the application of cybersecurity and privacy protection requirements to the organization’s data assets. This publication defines basic terminology and explains fundamental concepts in data classification so there is a common language for all to use. It can also help organizations improve the quality and efficiency of their data protection approaches by becoming more aware of data classification considerations and taking them into account in business and mission use cases, such as secure data sharing, compliance reporting and monitoring, zero-trust architecture, and large language models.

Created At: 14 December 2024

Updated At: 14 December 2024

Secure Software Development Practices for Generative AI and Dual-Use Foundation Models

Description: This document augments the secure software development practices and tasks defined in Secure Software Development Framework (SSDF) version 1.1 by adding practices, tasks, recommendations, considerations, notes, and informative references that are specific to AI model development throughout the software development life cycle. These additions are documented in the form of an SSDF Community Profile to support Executive Order (EO) 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which tasked NIST with “developing a companion resource to the [SSDF] to incorporate secure development practices for generative AI and for dual-use foundation models.” This Community Profile is intended to be useful to the producers of AI models, the producers of AI systems that use those models, and the acquirers of those AI systems. This Profile should be used in conjunction with NIST Special Publication (SP) 800-218, Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities.

Created At: 14 December 2024

Updated At: 14 December 2024

Random Forests

Description: Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost (Freund and Schapire [1996]), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation, and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
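The ingredients described in this abstract — per-tree randomization via bootstrap sampling, a random subset of features considered at each split, and internal (out-of-bag) estimates of error and variable importance — can be sketched with scikit-learn's `RandomForestClassifier`, which implements this algorithm. The synthetic dataset and all parameter values below are illustrative assumptions, not from the publication:

```python
# Minimal sketch of the random forest idea, assuming scikit-learn is
# available; the dataset here is synthetic and for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Each tree is grown on a bootstrap sample (the per-tree "random vector"),
# and each split considers only a random subset of features (max_features).
forest = RandomForestClassifier(
    n_estimators=100,       # error converges as this grows large
    max_features="sqrt",    # random feature selection at each node
    oob_score=True,         # internal (out-of-bag) error estimate
    random_state=0,
)
forest.fit(X, y)

# Out-of-bag accuracy serves as the internal generalization estimate,
# and feature_importances_ is the internal variable-importance measure.
print(forest.oob_score_)
print(forest.feature_importances_.shape)
```

The out-of-bag score is computed from trees that did not see each sample in their bootstrap draw, so no separate validation split is needed for the internal estimate.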

Created At: 14 December 2024

Updated At: 14 December 2024

Foundations of the Theory of Probability - Kolmogorov

Description: Foundations of the Theory of Probability - Kolmogorov

Created At: 14 December 2024

Updated At: 14 December 2024

INTRABENCH Interactive Radiological Benchmark

Description: Current interactive segmentation approaches, inspired by the success of Meta's Segment Anything model, have achieved notable advancements; however, they come with substantial limitations that hinder their practical application in real clinical scenarios. These include unrealistic human interaction requirements, such as slice-by-slice operations for 2D models on 3D data, a lack of iterative refinement, and insufficient evaluation experiments. These shortcomings prevent accurate assessment of model performance and lead to inconsistent outcomes across studies. IntRaBench overcomes these challenges by offering a comprehensive and reproducible framework for evaluating interactive segmentation methods in realistic, clinically relevant scenarios. It includes diverse datasets, target structures, and segmentation models, and provides a flexible codebase that allows seamless integration of new models and prompting strategies. Additionally, we introduce advanced techniques to minimize clinician interaction, ensuring fair comparisons between 2D and 3D models. By open-sourcing IntRaBench, we invite the research community to integrate their models and prompting techniques, ensuring continuous and transparent evaluation of interactive segmentation models in 3D medical imaging.

Created At: 14 December 2024

Updated At: 14 December 2024
