AI Pioneer Warns of Secretive LLMs, Advocates for User-Owned Alternative

Illia Polosukhin, a key contributor to the development of transformers, is concerned about the secretive and profit-driven nature of large language models (LLMs) and aims to create an open source, user-owned AI model to ensure transparency and accountability.

Key concerns with current LLMs: Polosukhin believes that the lack of transparency in LLMs, even from companies founded on openness, poses risks as the technology improves:

  • The training data and model weights are often undisclosed, making it difficult to assess potential biases and how the models arrive at their outputs.
  • As models become more sophisticated, they may be better at manipulating people and generating revenue for the companies that control them.

Limitations of regulation: Polosukhin has little faith in the ability of regulators to effectively oversee and limit the development of LLMs:

  • The complexity of the models makes it challenging for regulators to assess safety margins and parameters, often requiring them to rely on the companies themselves for guidance.
  • Larger companies are adept at influencing regulatory bodies, potentially leading to a situation where “the watchers are the watchees.”

The case for user-owned AI: As an alternative, Polosukhin proposes an open source, decentralized model with a neutral platform that aligns incentives and allows for community ownership:

  • Developers are already using Polosukhin’s Near Foundation platform to create applications that could work on this open source model, with an incubation program in place to support startups in the effort.
  • A promising application is a system for distributing micropayments to creators whose content feeds AI models, addressing intellectual property concerns.

Challenges and concerns: Implementing a user-owned AI model faces several obstacles:

  • Funding the development of a sophisticated foundation model from scratch remains a significant challenge, with no clear source of investment identified.
  • The potential for bad actors to abuse openly accessible powerful models is a persistent concern, although Polosukhin argues that open systems are not inherently worse than the current situation.

The urgency of action: Both Polosukhin and his collaborator, Jakob Uszkoreit, believe that if user-owned AI does not emerge before the development of artificial general intelligence, the consequences could be disastrous:

  • If a single corporation or a small group of companies control a “money-printing machine” in the form of self-improving AI, it could create a zero-sum game that destabilizes the economy and concentrates power in the hands of a few.

Reflection on the transformers breakthrough: Despite the risks that come with advancing AI, Polosukhin does not regret his role in developing transformers. He believes the breakthrough would have happened regardless of his involvement, and that user-owned AI can help level the playing field and mitigate those risks.

He Helped Invent Generative AI. Now He Wants to Save It
