AI Breakthrough: Language Models without Matrix Multiplication, Slashing Power Consumption

Researchers claim a breakthrough in AI efficiency by eliminating matrix multiplication, a fundamental operation in current neural networks, which could significantly reduce the power consumption and costs of running large language models.

Key Takeaways:

  • Researchers from UC Santa Cruz, UC Davis, LuxiTech, and Soochow University have developed a method to run AI language models without matrix multiplication (MatMul), the operation that power-hungry GPU chips are designed to accelerate (a minimal sketch of the idea follows this list).
  • Their custom 2.7-billion-parameter model achieved performance comparable to conventional large language models while consuming far less power when run on an FPGA chip.
  • This development challenges the prevailing paradigm that matrix multiplication is indispensable for building high-performing language models and could make them more accessible, efficient, and sustainable.
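The principle behind “MatMul-free” computation can be shown in a few lines of code: if every weight is constrained to -1, 0, or +1, a matrix-vector product collapses into additions and subtractions, so no multiply units are needed. The sketch below is a minimal NumPy illustration of that idea, not the researchers' actual implementation:

```python
import numpy as np

def ternary_linear(x, W):
    """Apply a ternary weight matrix W (entries in {-1, 0, +1}) to x.

    Because every weight is -1, 0, or +1, each output element is just a
    sum of selected inputs and negated inputs -- no multiplications.
    """
    out = np.zeros(W.shape[0], dtype=x.dtype)
    for i in range(W.shape[0]):
        # Add inputs where the weight is +1, subtract where it is -1,
        # and skip positions where it is 0.
        out[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
    return out

# Example: a 3x4 ternary weight matrix applied to a length-4 input.
W = np.array([[ 1, 0, -1,  1],
              [ 0, 1,  1,  0],
              [-1, 1,  0, -1]])
x = np.array([0.5, -2.0, 3.0, 1.0])

print(ternary_linear(x, W))  # additions/subtractions only
print(W @ x)                 # identical result via ordinary matmul
```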

Implications for the AI industry: The findings bear directly on the environmental impact and operational costs of AI systems:

  • GPUs, particularly those from Nvidia, currently dominate the AI hardware market due to their ability to quickly perform matrix multiplication in parallel, but this new approach may disrupt that status quo.
  • By reducing the power consumption of running large language models, this technique could help mitigate concerns about the growing energy footprint of the AI industry as it scales up.
  • Making large language models more efficient could also enable their deployment on resource-constrained devices like smartphones, expanding their potential applications.

Building upon previous work: The researchers cite BitNet, a “1-bit” transformer technique, as an important precursor to their work:

  • BitNet demonstrated the viability of using binary and ternary weights in language models, successfully scaling up to 3 billion parameters while maintaining competitive performance.
  • However, BitNet still relied on matrix multiplications in its self-attention mechanism, which motivated the researchers to develop a completely “MatMul-free” architecture (a sketch of the underlying ternary quantization appears after this list).
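For context, here is a rough sketch of how full-precision weights can be mapped to ternary values in BitNet-style models. It follows the published “absmean” rule from BitNet b1.58 (scale by the mean absolute weight, then round and clip to {-1, 0, +1}); treat it as an illustrative assumption rather than this paper's exact recipe:

```python
import numpy as np

def absmean_ternary_quantize(W, eps=1e-8):
    """Quantize a float weight matrix to {-1, 0, +1} plus one scale factor.

    Implements the absmean scheme described for BitNet b1.58: divide by
    the mean absolute weight, then round each entry to the nearest
    ternary value.
    """
    scale = np.abs(W).mean() + eps  # one scalar scale per matrix
    W_ternary = np.clip(np.round(W / scale), -1, 1).astype(np.int8)
    return W_ternary, scale

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)
W_t, s = absmean_ternary_quantize(W)
print(W_t)  # entries are only -1, 0, or +1
print(s)    # shared scale used to approximate the original magnitudes
```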

Broader Implications:

The paper has not yet been peer-reviewed, but if its claims hold up, this development could mark a significant shift in how AI systems are designed and operated. By fundamentally redesigning the core computational operations of neural networks, the researchers are challenging long-held assumptions about the necessity of matrix multiplication for high-performance AI.

This work opens up new possibilities for more efficient, sustainable, and accessible AI systems. However, key questions remain about the scalability and generalizability of this approach across different types of AI models and real-world applications. Further research and validation will be needed to fully understand the potential impact of this new paradigm.

Source: Researchers upend AI status quo by eliminating matrix multiplication in LLMs
