A daily podcast about newly published articles in the LLM field.
18 NOV. 2024 · ⚖️ Scaling Laws for Precision
This research paper investigates the impact of precision in training and inference on the performance of large language models. The authors explore how precision affects the effective parameter count and propose scaling laws that predict performance degradation due to low-precision training and post-training quantization. They find that overtrained models are more sensitive to post-training quantization, and that training larger models in lower precision might be computationally optimal. Their unified scaling law accounts for both training and post-training effects and predicts loss in varied precision settings, ultimately suggesting that the standard practice of training models in 16-bit might be suboptimal.
📎 https://arxiv.org/abs/2411.04330
🌐 https://x.com/tanishqkumar07/status/1856045600355352753
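For intuition, the shape of such a law can be sketched in a few lines of Python. This is an illustrative toy only: a Chinchilla-style loss where low training precision shrinks the effective parameter count; the saturation form and every constant here are placeholder assumptions, not the paper's fitted values.

```python
import math

def effective_params(n_params, precision_bits, gamma=2.0):
    # Illustrative assumption: lower precision reduces the *effective*
    # parameter count via an exponential saturation in the bit width.
    return n_params * (1.0 - math.exp(-precision_bits / gamma))

def predicted_loss(n_params, n_tokens, precision_bits,
                   a=400.0, alpha=0.34, b=410.0, beta=0.28, e=1.69):
    # Chinchilla-style loss with N replaced by N_eff(P).
    n_eff = effective_params(n_params, precision_bits)
    return a / n_eff**alpha + b / n_tokens**beta + e

# Lower training precision predicts higher loss at equal parameter count:
assert predicted_loss(1e9, 1e11, 4) > predicted_loss(1e9, 1e11, 16)
```

Under a form like this, the compute-optimality question becomes a trade-off: more (cheaper) low-precision parameters versus fewer high-precision ones at the same compute budget.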
14 NOV. 2024 · ⌛️ The Surprising Effectiveness of Test-Time Training for Abstract Reasoning
This paper examines how test-time training (TTT) can enhance the abstract reasoning abilities of large language models (LLMs). TTT, which updates model parameters during inference, significantly improves performance on the Abstraction and Reasoning Corpus (ARC) benchmark. Key factors for effective TTT include initial fine-tuning, auxiliary tasks, and instance-specific training. The approach achieves state-of-the-art results on ARC, even matching human averages with program synthesis. This study suggests that dedicating computation at test time, rather than relying on symbolic components, may be essential for complex reasoning tasks.
📎 https://ekinakyurek.github.io/papers/ttt.pdf
12 NOV. 2024 · 🔷 Qwen2.5-Coder Technical Report
The report introduces the Qwen2.5-Coder series, which includes the Qwen2.5-Coder-1.5B and Qwen2.5-Coder-7B models. These models are specifically designed for coding tasks and have been pre-trained on a massive dataset of 5.5 trillion code-related tokens. A significant focus is placed on data quality, with detailed cleaning and filtering processes, and advanced training techniques such as file-level and repo-level pre-training. The models were rigorously tested on various benchmarks, including code generation, completion, reasoning, repair, and text-to-SQL tasks, where they demonstrated strong performance, even surpassing larger models in some areas. The report concludes with suggestions for future research, such as scaling model size and enhancing reasoning abilities.
📎 https://arxiv.org/abs/2409.12186
9 NOV. 2024 · 😈 Attacking Vision-Language Computer Agents via Pop-ups
This research paper examines vulnerabilities in vision-language models (VLMs) that power autonomous agents performing computer tasks. The authors show that these VLM agents can be easily tricked into clicking on carefully crafted malicious pop-ups, which humans would typically recognize and avoid. These deceptive pop-ups mislead the agents, disrupting their task performance and reducing success rates. The study tests various pop-up designs across different VLM agents and finds that even simple countermeasures, such as instructing the agent to ignore pop-ups, are ineffective. The authors conclude that these vulnerabilities highlight serious security risks and call for more robust safety measures to ensure reliable agent performance.
📎 https://arxiv.org/abs/2411.02391
8 NOV. 2024 · 📓 Number Cookbook: Number Understanding of Language Models and How to Improve It
This research paper examines the numerical understanding and processing abilities (NUPA) of large language models (LLMs). The authors create a benchmark to test LLMs on four numerical representations (integers, floating-point numbers, fractions, and scientific notation) across 17 tasks grouped into four ability categories. They find that, despite strong problem-solving capabilities, LLMs struggle with basic numerical operations. The paper evaluates methods to enhance NUPA during pretraining and finetuning, such as specialized tokenizers, positional encodings, and data formats, and notes the limitations of chain-of-thought techniques for numerical tasks. The authors call for further research to improve LLMs' fundamental numerical capabilities.
📎 https://arxiv.org/abs/2411.03766
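A benchmark of this kind is easy to miniaturize. The sketch below generates exact-match items over two of the four representations (integers and floats); the task names and item formats are illustrative, not the paper's.

```python
import random

def make_nupa_items(n=5, seed=0):
    # Generate basic-arithmetic probes; exact string match is the metric,
    # since elementary numeric tasks have a single indisputable answer.
    rng = random.Random(seed)
    items = []
    for _ in range(n):
        a, b = rng.randint(100, 999), rng.randint(100, 999)
        items.append({"task": "int_add", "prompt": f"{a} + {b} =",
                      "answer": str(a + b)})
        x, y = rng.randint(1, 99) / 10, rng.randint(1, 99) / 10
        items.append({"task": "float_max", "prompt": f"max({x}, {y}) =",
                      "answer": str(max(x, y))})
    return items

def score(items, model_answers):
    hits = sum(m == it["answer"] for it, m in zip(items, model_answers))
    return hits / len(items)

items = make_nupa_items()
gold = [it["answer"] for it in items]
assert score(items, gold) == 1.0  # a perfect model scores 1.0
```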
7 NOV. 2024 · 🧩 Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models
This research paper investigates the vulnerabilities of large language models (LLMs) to "jailbreak" attacks, where malicious users attempt to trick the model into generating harmful content. The authors propose a new attack strategy called Jigsaw Puzzles (JSP) which breaks down harmful questions into harmless fractions and feeds them to the LLM in multiple turns, bypassing the model's built-in safeguards. The paper explores the effectiveness of JSP across different LLM models and harmful categories, analyzing the role of various prompt designs and splitting strategies. The authors also compare JSP's performance to other existing jailbreak methods and demonstrate its ability to overcome various defense mechanisms. The paper concludes by highlighting the importance of continued research and development of more robust defenses against such attacks.
📎 https://arxiv.org/abs/2410.11459
5 NOV. 2024 · 🤝 Multi-expert Prompting with LLMs
The research paper presents Multi-expert Prompting, a novel method for improving the reliability, safety, and usefulness of Large Language Models (LLMs). Multi-expert Prompting simulates multiple experts within an LLM, collecting their answers to an instruction and aggregating them into a final response. This process leverages the Nominal Group Technique, a human-designed decision-making framework, to ensure a balanced and comprehensive output, surpassing the limitations of single-expert approaches. The authors demonstrate the method’s effectiveness through thorough evaluation on various benchmarks, highlighting its significant improvements in truthfulness, factuality, toxicity reduction, and overall informativeness compared to existing baselines.
📎 https://arxiv.org/abs/2411.00492
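The orchestration can be sketched in three steps, with `ask_llm` standing in for a real model call (a hypothetical prompt-in, text-out interface) and a canned stub used for the smoke test:

```python
def multi_expert_answer(ask_llm, instruction, n_experts=3):
    # Step 1: have the model propose distinct expert roles.
    roles_prompt = (f"List {n_experts} distinct expert roles suited to "
                    f"answering:\n{instruction}")
    roles = ask_llm(roles_prompt).splitlines()[:n_experts]

    # Step 2: answer the instruction once per simulated expert.
    answers = [ask_llm(f"You are {role}. Answer:\n{instruction}")
               for role in roles]

    # Step 3: NGT-style aggregation -- present all answers and ask for
    # one consolidated response with conflicts resolved.
    listing = "\n".join(f"Expert {i+1}: {a}"
                        for i, a in enumerate(answers))
    return ask_llm("Aggregate these expert answers into one balanced "
                   f"final answer:\n{listing}")

# Smoke test with a canned stand-in for the LLM:
def fake_llm(prompt):
    if prompt.startswith("List"):
        return "a cardiologist\na dietitian\na pharmacist"
    if prompt.startswith("Aggregate"):
        return "Consolidated: " + prompt.count("Expert ") * "*"
    return "advice"

result = multi_expert_answer(fake_llm, "Is coffee healthy?")
assert result.startswith("Consolidated: ***")  # three answers merged
```

The paper's aggregation step is more structured than the single prompt shown here, but the single-model, multiple-persona flow is the core of the method.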
3 NOV. 2024 · 🔎 Investigating the Role of Prompting and External Tools in Hallucination Rates of Large Language Models
This paper examines the effectiveness of different prompting techniques and frameworks for mitigating hallucinations in large language models (LLMs). The authors investigate how these techniques, including Chain-of-Thought, Self-Consistency, and Multiagent Debate, can improve reasoning capabilities and reduce factual inconsistencies. They also explore the impact of LLM agents, which are AI systems designed to perform complex tasks by combining LLMs with external tools, on hallucination rates. The study finds that the best strategy for reducing hallucinations depends on the specific NLP task, and that while external tools can extend the capabilities of LLMs, they can also introduce new hallucinations.
📎 https://arxiv.org/abs/2410.19385
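One of the surveyed techniques, Self-Consistency, reduces to a short voting loop. A minimal sketch, with `sample_answer` as a hypothetical stand-in for a stochastic LLM call:

```python
from collections import Counter

def self_consistency(sample_answer, question, k=5):
    # Sample k independent reasoning paths and keep the majority
    # final answer, plus the agreement ratio as a rough confidence.
    answers = [sample_answer(question) for _ in range(k)]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / k

# Smoke test with canned samples standing in for stochastic runs:
samples = iter(["42", "41", "42", "42", "7"])
answer, agreement = self_consistency(lambda q: next(samples), "6*7?")
assert answer == "42" and agreement == 0.6
```

A low agreement ratio is one signal that the sampled answers may be unreliable, which is where this technique connects to hallucination detection.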
2 NOV. 2024 · 🌀 Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse
This research paper examines how chain-of-thought (CoT) prompting—encouraging models to reason step-by-step—affects large language and multimodal model performance across tasks. While CoT generally boosts performance, the authors find it significantly hampers model accuracy in three specific contexts: implicit statistical learning, facial recognition, and classifying data with exceptions. The paper suggests a similarity between CoT and human verbal reasoning, proposing that tasks where deliberate thinking harms human performance may similarly impair models using CoT. The study concludes that recognizing scenarios where reasoning is counterproductive for humans can highlight situations where CoT also hinders model effectiveness.
📎 https://arxiv.org/abs/2410.21333
31 OCT. 2024 · ❓ Measuring short-form factuality in large language models
This document introduces SimpleQA, a new benchmark for evaluating the factuality of large language models. The benchmark consists of over 4,000 short, fact-seeking questions designed to be challenging for advanced models, with a focus on ensuring a single, indisputable answer. The authors argue that SimpleQA is a valuable tool for assessing whether models "know what they know", meaning their ability to correctly answer questions with high confidence. They further explore the calibration of language models, investigating the correlation between confidence and accuracy, as well as the consistency of responses when the same question is posed multiple times. The authors conclude that SimpleQA provides a valuable framework for evaluating the factuality of language models and encourages the development of more trustworthy and reliable models.
📎 https://cdn.openai.com/papers/simpleqa.pdf
🌐 https://openai.com/index/introducing-simpleqa/
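The headline metrics are straightforward to compute once each response is graded. A sketch assuming grades are already assigned (the benchmark itself uses a model-based grader); the F-score here is the harmonic mean of overall accuracy and accuracy on attempted questions:

```python
def simpleqa_metrics(records):
    # Each record carries a 'grade' of 'correct', 'incorrect', or
    # 'not_attempted'; a model may decline to answer.
    n = len(records)
    correct = sum(r["grade"] == "correct" for r in records)
    attempted = sum(r["grade"] != "not_attempted" for r in records)
    overall = correct / n                 # correct over all questions
    given_attempted = correct / attempted # precision on attempts only
    # Harmonic mean rewards answering often *and* answering correctly.
    f_score = 2 * overall * given_attempted / (overall + given_attempted)
    return overall, given_attempted, f_score

grades = ["correct"] * 6 + ["incorrect"] * 2 + ["not_attempted"] * 2
overall, prec, f = simpleqa_metrics([{"grade": g} for g in grades])
assert overall == 0.6 and prec == 0.75
```

Splitting "correct overall" from "correct given attempted" is what lets the benchmark separate a model's knowledge from its willingness to guess.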
Information
Author | Shahriar Shariati
Organization | Shahriar Shariati
Categories | Technology, Mathematics, Tech News
Website | -
Email | shahriarshm81@gmail.com
Copyright 2024 - Spreaker Inc. an iHeartMedia Company