This AI Paper from NYU and Meta AI Introduces LIFT: Length-Instruction Fine-Tuning for Improved Control and Quality in Instruction-Following LLMs

https://arxiv.org/abs/2406.17744

Artificial intelligence (AI) has advanced significantly with the development of large language models (LLMs) that process user instructions. These models aim to provide accurate and relevant answers to human questions, often requiring fine-tuning to increase their performance in various applications, such as customer service, information retrieval, and content creation. The ability to accurately instruct these models has become a cornerstone of modern AI, pushing the limits of what these systems can achieve in practical scenarios.

A key challenge in developing and evaluating instruction-following models is inherent length bias. This bias arises because human evaluators and training algorithms tend to favor longer answers, resulting in models that produce unnecessarily long outputs. The preference complicates the assessment of model quality and effectiveness, since longer answers are not necessarily more informative or accurate. Consequently, the challenge is to develop models that understand instructions and can generate responses of an appropriate length.

Current methods of addressing length bias include adding length penalties to evaluation criteria. For example, AlpacaEval and MT-Bench integrate such penalties to counteract the tendency of models to produce overly long responses. Additionally, fine-tuning techniques such as reinforcement learning from human feedback (RLHF) are used to optimize models for better instruction-following capabilities. These methods aim to improve the models' ability to generate responses that are both concise and comprehensive, balancing output length against quality.
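To make the idea of a length penalty concrete, the toy sketch below subtracts a penalty proportional to response length from a raw quality score. The penalty coefficient and the scoring scheme are illustrative assumptions, not the actual corrections used by AlpacaEval or MT-Bench.

```python
# Toy sketch of a length-penalized preference score. The coefficient `alpha`
# and the scoring scheme are illustrative assumptions; real benchmarks such as
# AlpacaEval apply their own, more principled length corrections.

def length_penalized_score(quality: float, n_words: int, alpha: float = 0.0005) -> float:
    """Penalize a raw quality score in proportion to the response length."""
    return quality - alpha * n_words

# A slightly better but much longer answer can lose to a concise one.
long_answer = length_penalized_score(quality=0.82, n_words=600)   # 0.82 - 0.30 = 0.52
short_answer = length_penalized_score(quality=0.78, n_words=120)  # 0.78 - 0.06 = 0.72
print(long_answer < short_answer)  # True
```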

Meta FAIR and New York University researchers have introduced a new method called Length-Instruction Fine-Tuning (LIFT), which augments training data with explicit length instructions. This enables models to be controlled at inference time so that they adhere to specified length constraints. The researchers designed the approach to reduce length bias and improve models' adherence to length-specific instructions. By adding explicit length instructions to the training data, models learn to respect these constraints in real-world applications.
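As a rough illustration of this data-augmentation step, the sketch below prepends an explicit word limit to an ordinary instruction. The template wording and the limits are assumptions made for illustration, not the exact prompts used in the paper.

```python
# Minimal sketch of length-instruction augmentation: an explicit word limit is
# prepended to an ordinary instruction. The template wording and limits are
# illustrative assumptions, not the paper's exact prompts.

def augment_with_length_instruction(prompt: str, max_words: int) -> str:
    """Turn a plain instruction into a length-instructed one."""
    return f"Answer the following instruction using at most {max_words} words.\n\n{prompt}"

examples = [
    {"prompt": "Explain how photosynthesis works.", "max_words": 80},
    {"prompt": "Summarize the plot of Hamlet.", "max_words": 120},
]

for ex in examples:
    print(augment_with_length_instruction(ex["prompt"], ex["max_words"]))
    print("---")
```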

The LIFT method uses direct preference optimization (DPO) to fine-tune models on augmented datasets containing length instructions. The process begins by expanding a conventional instruction-following dataset, inserting length constraints into the prompts. The method then creates preference pairs that reflect both length-constraint compliance and response quality. These expanded datasets are used to fine-tune models such as Llama 2 and Llama 3, ensuring that they can handle prompts both with and without length instructions. This systematic approach allows the models to learn from varied instructions, increasing their ability to generate responses that are accurate and appropriately sized.
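The sketch below shows one plausible way such preference pairs could be assembled: a response that violates the word limit is rejected in favor of a compliant one, and quality serves as the tie-breaker when both comply or both violate. The pairing rule and field names are assumptions made for illustration; the paper's exact construction may differ.

```python
# Sketch of building DPO preference pairs that reflect both length-constraint
# compliance and response quality. The pairing rule and field names are
# illustrative assumptions, not the paper's exact recipe.

def violates(response: str, max_words: int) -> bool:
    """True if the response exceeds its word limit."""
    return len(response.split()) > max_words

def build_preference_pair(resp_a: str, resp_b: str,
                          quality_a: float, quality_b: float,
                          max_words: int) -> dict:
    """Prefer the length-compliant response; fall back to quality otherwise."""
    a_ok, b_ok = not violates(resp_a, max_words), not violates(resp_b, max_words)
    if a_ok != b_ok:
        chosen, rejected = (resp_a, resp_b) if a_ok else (resp_b, resp_a)
    elif quality_a >= quality_b:
        chosen, rejected = resp_a, resp_b
    else:
        chosen, rejected = resp_b, resp_a
    return {"chosen": chosen, "rejected": rejected}

pair = build_preference_pair(
    resp_a="Photosynthesis converts light into chemical energy. " * 10,  # 60 words
    resp_b="Plants use sunlight, water, and CO2 to make glucose and oxygen.",
    quality_a=0.9, quality_b=0.7, max_words=50,
)
print(pair["chosen"][:60])  # the shorter, compliant response is preferred
```

The resulting chosen/rejected pairs are in the format that standard DPO training consumes, which is how length compliance learned during fine-tuning carries over to inference time.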

The proposed LIFT-DPO models showed superior performance in handling length constraints compared to existing state-of-the-art models such as GPT-4 and Llama 3. For example, the researchers found that GPT-4 Turbo violated the length constraints nearly 50% of the time, highlighting a critical flaw in its design. In contrast, LIFT-DPO models showed significantly lower violation rates. Specifically, the Llama-2-70B-Base model showed a violation rate of 65.8% on AlpacaEval-LI under standard DPO training, which dropped to 7.1% with LIFT-DPO training. Similarly, the violation rate of the Llama-2-70B-Chat model decreased from 15.1% with standard DPO to 2.7% with LIFT-DPO, demonstrating the method's effectiveness in controlling response length.
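The violation rate cited above is simply the fraction of responses that exceed their per-prompt word limit. A minimal sketch of that metric follows; the example responses and limits are made up for illustration.

```python
# Minimal sketch of the length-constraint violation-rate metric used in
# AlpacaEval-LI-style evaluations. The example responses and limits below are
# made up for illustration.

def violation_rate(responses: list[str], word_limits: list[int]) -> float:
    """Fraction of responses that exceed their per-prompt word limit."""
    violations = sum(
        len(resp.split()) > limit for resp, limit in zip(responses, word_limits)
    )
    return violations / len(responses)

responses = ["word " * 60, "word " * 40, "word " * 200, "word " * 10]
word_limits = [50, 50, 150, 50]
print(violation_rate(responses, word_limits))  # 0.5 -> two of four exceed their limit
```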

Additionally, LIFT-DPO models maintained high response quality while adhering to length constraints. Win rates improved significantly, indicating that the models could generate high-quality responses within the specified length limits. For example, the win rate of the Llama-2-70B-Base model increased from 4.6% with standard DPO to 13.6% with LIFT-DPO. These results show that the method successfully balances length control and response quality, offering a robust remedy for length-biased evaluation.

Finally, the research addresses the issue of length bias in instruction-following models by introducing the LIFT method. The approach improves the robustness and quality of model responses by integrating length constraints into the training process. The results show that LIFT-DPO models outperform traditional methods, providing more reliable and effective solutions for following length-constrained instructions. The collaboration between Meta FAIR and New York University significantly advances the development of AI models that can generate high-quality responses of controlled length, setting a new standard for instruction-following capabilities in AI research.


Check out the Paper. All credit for this research goes to the researchers of this project.


Nikhil is an intern consultant at Marktech Post. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in materials science, he is exploring new developments and creating partnership opportunities.
