Publication Library

Reallocation of Temporary Identities: Applying 5G Cybersecurity and Privacy Capabilities

Description: This white paper is part of a series called Applying 5G Cybersecurity and Privacy Capabilities, which covers 5G cybersecurity- and privacy-supporting capabilities that were implemented as part of the 5G Cybersecurity project at the National Cybersecurity Center of Excellence (NCCoE). This white paper provides additional details regarding how 5G protects subscriber identities (IDs). It focuses on how the network reallocates temporary IDs to protect users from being identified and located by an attacker. Unlike previous generations of cellular systems, new requirements in 5G explicitly define when the temporary ID must be reallocated (refreshed). 5G network operators should be aware of how this standards-defined security capability protects their users and subscribers. Operators should ensure that their 5G technologies are refreshing temporary identities as described in the 5G standards.
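
As a rough illustration of the refresh behavior the paper describes, here is a minimal Python sketch of a network function that issues a fresh temporary identifier on triggering events. The event names and the TemporaryIdAllocator class are assumptions made for the sketch, not 3GPP-defined interfaces or the paper's implementation.

```python
import secrets

# Illustrative refresh rule: the network assigns a fresh temporary ID on
# specific triggering events instead of letting one value persist, so an
# eavesdropper cannot track a subscriber via a long-lived identifier.
# Event names here are assumptions for the sketch, not 3GPP terminology.
REFRESH_EVENTS = {"registration", "paging_triggered_service_request"}

class TemporaryIdAllocator:
    def __init__(self):
        self.current_id = None

    def _fresh_id(self) -> str:
        # A real 5G temporary ID (5G-GUTI) carries structured fields;
        # an unpredictable token stands in for the refreshed part here.
        return secrets.token_hex(4)

    def handle(self, event: str) -> str:
        if self.current_id is None or event in REFRESH_EVENTS:
            self.current_id = self._fresh_id()  # old ID can no longer be linked
        return self.current_id

alloc = TemporaryIdAllocator()
print(alloc.handle("registration"))                      # fresh ID assigned
print(alloc.handle("uplink_data"))                       # ID unchanged
print(alloc.handle("paging_triggered_service_request"))  # refreshed again
```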

Created At: 15 December 2024

Updated At: 15 December 2024

Applying 5G Cybersecurity and Privacy Capabilities

Description: This document introduces the white paper series titled Applying 5G Cybersecurity and Privacy Capabilities. This series is being published by the National Cybersecurity Center of Excellence (NCCoE) 5G Cybersecurity project. Each paper in the series will include information, guidance, and research findings for an individual technical cybersecurity- or privacy-supporting capability available in 5G systems or their supporting infrastructures. Each of the capabilities has been implemented in a testbed as part of the NCCoE project, and each white paper reflects the results of that implementation and its testing.

Created At: 15 December 2024

Updated At: 15 December 2024

Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence

Description: We propose MODEL SWARMS, a collaborative search algorithm to adapt LLMs via swarm intelligence, the collective behavior guiding individual systems. Specifically, MODEL SWARMS starts with a pool of LLM experts and a utility function. Guided by the best-found checkpoints across models, diverse LLM experts collaboratively move in the weight space and optimize a utility function representing model adaptation objectives. Compared to existing model composition approaches, MODEL SWARMS offers tuning-free model adaptation, works in low-data regimes with as few as 200 examples, and does not require assumptions about specific experts in the swarm or how they should be composed. Extensive experiments demonstrate that MODEL SWARMS could flexibly adapt LLM experts to a single task, multi-task domains, reward models, as well as diverse human interests, improving over 12 model composition baselines by up to 21.0% across tasks and contexts. Further analysis reveals that LLM experts discover previously unseen capabilities in initial checkpoints and that MODEL SWARMS enables the weak-to-strong transition of experts through the collaborative search process.
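
The description reads as a particle-swarm-style search over model weights: each expert tracks its personal best checkpoint and is pulled toward the swarm's global best under a utility function. Below is a minimal, illustrative Python sketch of that reading on toy weight vectors; the function name, hyperparameters, and toy utility are assumptions, not the paper's code.

```python
import numpy as np

def swarm_search_sketch(experts, utility, steps=50,
                        inertia=0.6, c_best=1.5, c_global=1.5, seed=0):
    """PSO-style search over flattened weight vectors (toy sketch).

    experts: list of 1-D numpy weight vectors; utility: weights -> score.
    Hyperparameter names and values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    x = [w.copy() for w in experts]             # current positions in weight space
    v = [np.zeros_like(w) for w in experts]     # velocities
    p_best = [w.copy() for w in experts]        # each expert's best checkpoint
    p_score = [utility(w) for w in experts]
    g = p_best[int(np.argmax(p_score))].copy()  # best-found checkpoint across models
    for _ in range(steps):
        for i in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            v[i] = (inertia * v[i]
                    + c_best * r1 * (p_best[i] - x[i])    # pull toward own best
                    + c_global * r2 * (g - x[i]))         # pull toward swarm best
            x[i] = x[i] + v[i]
            s = utility(x[i])
            if s > p_score[i]:
                p_best[i], p_score[i] = x[i].copy(), s
        g = p_best[int(np.argmax(p_score))].copy()
    return g

# Toy demo: the utility rewards closeness to a hidden target vector.
target = np.array([1.0, -2.0, 0.5])
utility = lambda w: -np.sum((w - target) ** 2)
experts = [np.random.default_rng(i).normal(size=3) for i in range(4)]
print(swarm_search_sketch(experts, utility))  # converges near the target
```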

Created At: 14 December 2024

Updated At: 14 December 2024

Biased AI Can Influence Political Decision-Making

Description: As modern AI models become integral to everyday tasks, concerns have emerged about their inherent biases and their potential impact on human decision-making. While bias in these models is well-documented, less is known about how it influences human decisions. This paper presents two interactive experiments investigating the effects of partisan bias in AI language models on political decision-making. Participants interacted freely with a biased liberal model, a biased conservative model, or an unbiased control model while completing political decision-making tasks. We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI’s bias, regardless of their personal political partisanship. However, we also discovered that prior knowledge about AI could lessen the impact of the bias, highlighting the possible importance of AI education for robust bias mitigation. Our findings not only demonstrate the critical effects of interacting with biased AI and its ability to shape public discourse and political conduct, but also point to potential techniques for mitigating these risks in the future.

Created At: 14 December 2024

Updated At: 14 December 2024

Carrot and Stick: Eliciting Comparison Data and Beyond

Description: Comparison data elicited from people are fundamental to many machine learning tasks, including reinforcement learning from human feedback for large language models and estimating ranking models. They are typically subjective and not directly verifiable. How can we truthfully elicit such comparison data from rational individuals? We design peer prediction mechanisms for eliciting comparison data using a bonus-penalty payment [11]. Our design leverages the strong stochastic transitivity of comparison data [60, 13] to create symmetrically strongly truthful mechanisms such that truth-telling 1) forms a strict Bayesian Nash equilibrium, and 2) yields the highest payment among all symmetric equilibria. Each individual only needs to evaluate one pair of items and report her comparison in our mechanism. We further extend the bonus-penalty payment concept to eliciting networked data, designing a symmetrically strongly truthful mechanism when agents’ private signals are sampled according to the Ising model. We provide the necessary and sufficient conditions for our bonus-penalty payment to have truth-telling as a strict Bayesian Nash equilibrium. Experiments on two real-world datasets further support our theoretical discoveries.
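
The bonus-penalty shape the abstract names pairs a reward for agreeing with a peer whose signal should be correlated with yours against a penalty for agreeing with a reference peer whose signal should be less correlated. Here is a generic Python sketch of that payment shape only; it is not the paper's exact mechanism, and the argument names and values are illustrative.

```python
def bonus_penalty_payment(own_report: int,
                          correlated_peer_report: int,
                          reference_peer_report: int,
                          bonus: float = 1.0,
                          penalty: float = 1.0) -> float:
    """Generic bonus-penalty peer prediction payment (illustrative sketch).

    Bonus for matching a peer whose report should correlate with yours;
    penalty for matching a reference peer whose report should not.
    """
    pay = 0.0
    if own_report == correlated_peer_report:
        pay += bonus
    if own_report == reference_peer_report:
        pay -= penalty
    return pay

# Reports are binary comparisons: 1 means "first item preferred".
print(bonus_penalty_payment(1, 1, 0))  #  1.0: matched the correlated peer only
print(bonus_penalty_payment(1, 0, 1))  # -1.0: matched only the reference peer
```

The penalty term is what blocks uninformative symmetric strategies (e.g., everyone always reporting 1): blind agreement earns the bonus and the penalty alike, so only genuinely correlated, truthful reports come out ahead in expectation.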

Created At: 14 December 2024

Updated At: 14 December 2024
