AdaMix, a parameter-efficient fine-tuning method, outperforms full model fine-tuning in few-shot NLU tasks across benchmarks like GLUE. Using prompt-based strategies without extra validation or unlabeled data, AdaMix consistently boosts performance with both BERT and RoBERTa encoders, demonstrating stability and efficiency in few-shot scenarios.

Smarter AI Training with Few-Shot Natural Language Tasks


Abstract and 1. Introduction

2. Background

    2.1 Mixture-of-Experts

    2.2 Adapters

3. Mixture-of-Adaptations

    3.1 Routing Policy

    3.2 Consistency regularization

    3.3 Adaptation module merging and 3.4 Adaptation module sharing

    3.5 Connection to Bayesian Neural Networks and Model Ensembling

4. Experiments

    4.1 Experimental Setup

    4.2 Key Results

    4.3 Ablation Study

5. Related Work

6. Conclusions

7. Limitations

8. Acknowledgment and References

Appendix

A. Few-shot NLU Datasets

B. Ablation Study

C. Detailed Results on NLU Tasks

D. Hyper-parameter

A Few-shot NLU Datasets

Data. In contrast to the fully supervised setting in the above experiments, we also perform few-shot experiments following the prior study (Wang et al., 2021) on six tasks including MNLI (Williams et al., 2018), RTE (Dagan et al., 2005; Bar Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), QQP[1] and SST-2 (Socher et al., 2013). The results are reported on their development sets following (Zhang et al., 2021). MPQA (Wiebe et al., 2005) and Subj (Pang and Lee, 2004) are used for polarity and subjectivity detection, where we follow (Gao et al., 2021) to keep 2,000 examples for testing. The few-shot model only has access to |K| labeled samples for any task. Following the true few-shot learning setting (Perez et al., 2021; Wang et al., 2021), we do not use any additional validation set for any hyper-parameter tuning or early stopping. The performance of each model is reported after a fixed number of training epochs. For a fair comparison, we use the same set of few-shot labeled instances for training as in (Wang et al., 2021). We train each model with 5 different seeds and report average performance with standard deviation across the runs. In the few-shot experiments, we follow (Wang et al., 2021) to train AdaMix via the prompt-based fine-tuning strategy. In contrast to (Wang et al., 2021), we do not use any unlabeled data.
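The evaluation protocol above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released code: the sample size, seeds, and the stand-in `train_and_eval` function are placeholder assumptions, and the point is only the structure (fixed |K| labeled examples per seed, no validation set, mean and standard deviation over 5 runs).

```python
import random
import statistics

def few_shot_protocol(dataset, k, seeds, train_and_eval):
    """Illustrative true few-shot protocol: for each seed, draw |K| labeled
    examples, train for a fixed number of epochs (no validation set, no
    early stopping), and report mean and std over the runs."""
    scores = []
    for seed in seeds:
        rng = random.Random(seed)
        support = rng.sample(dataset, k)  # the only labeled data the model sees
        scores.append(train_and_eval(support, seed))
    return statistics.mean(scores), statistics.stdev(scores)

# Toy usage with a stand-in "trainer" whose score depends only on the seed.
toy_data = [(f"sentence {i}", i % 2) for i in range(1000)]
mean_acc, std_acc = few_shot_protocol(
    toy_data, k=16, seeds=[1, 2, 3, 4, 5],
    train_and_eval=lambda support, seed: 0.80 + 0.01 * (seed % 3),
)
```

Reporting the standard deviation across seeds matters here because, with only |K| labeled examples and no validation set, run-to-run variance is the main axis on which few-shot methods differ in stability.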


B Ablation Study

Table 11: Ablation study demonstrating the impact of parameter sharing in the AdaMix adapter framework.


C Detailed Results on NLU Tasks

The results on NLU tasks are included in Table 1 and Table 13. AdaMix with the RoBERTa-large encoder achieves the best performance in terms of different task metrics in the GLUE benchmark. AdaMix with adapters is the only PEFT method which outperforms full model fine-tuning on all the tasks and on the average score. Additionally, the improvement brought by AdaMix is more significant with BERT-base as the encoder, demonstrating 2.2% and 1.2% improvement over the performance of full model fine-tuning and the best-performing baseline UNIPELT with BERT-base, respectively. The improvement is observed to be consistent on every task, as with RoBERTa-large. The NLG results are included in Table 4 and Table 5.

Table 12: Varying the bottleneck dimension of adapters in AdaMix with BERT-base and RoBERTa-large encoders. * denotes the bottleneck dimension used in AdaMix with adapters.
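The bottleneck dimension varied in Table 12 controls the size of each adapter module. A minimal sketch of a generic bottleneck adapter (in the style of Houlsby et al.; not the exact AdaMix module, and all dimensions below are illustrative assumptions) shows why: the adapter adds a down-projection to dimension r, a nonlinearity, an up-projection back to the hidden dimension d, and a residual connection, so its parameter count grows as 2·d·r.

```python
import numpy as np

def bottleneck_adapter(h, w_down, w_up):
    """Generic bottleneck adapter: project the hidden state h (dim d) down
    to the bottleneck dim r, apply a ReLU, project back up to dim d, and
    add the residual connection."""
    z = np.maximum(w_down @ h, 0.0)  # down-projection + ReLU, shape (r,)
    return h + w_up @ z              # up-projection + residual, shape (d,)

d, r = 768, 16                       # hidden and bottleneck dims (illustrative)
rng = np.random.default_rng(0)
h = rng.standard_normal(d)
w_down = rng.standard_normal((r, d)) * 0.01
w_up = np.zeros((d, r))              # zero-init up-projection: adapter starts as identity
out = bottleneck_adapter(h, w_down, w_up)
```

Zero-initializing the up-projection is a common adapter trick: the module begins as an exact identity mapping, so inserting it does not perturb the pretrained model before training starts.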

D Hyper-parameter

Detailed hyper-parameter configurations for the different tasks are presented in Table 15 and Table 16.


:::info Authors:

(1) Yaqing Wang, Purdue University (wang5075@purdue.edu);

(2) Sahaj Agarwal, Microsoft (sahagar@microsoft.com);

(3) Subhabrata Mukherjee, Microsoft Research (submukhe@microsoft.com);

(4) Xiaodong Liu, Microsoft Research (xiaodl@microsoft.com);

(5) Jing Gao, Purdue University (jinggao@purdue.edu);

(6) Ahmed Hassan Awadallah, Microsoft Research (hassanam@microsoft.com);

(7) Jianfeng Gao, Microsoft Research (jfgao@microsoft.com).

:::


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::

[1] https://www.quora.com/q/quoradata/
