Yann LeCun’s Vision of AGI vs. SingularityNET’s

In this presentation at the AI Action Summit in Paris, Yann LeCun, Meta’s Chief AI Scientist, argues that true Artificial General Intelligence (AGI) will not be achieved with Large Language Models (LLMs), but rather through world models such as JEPA (Joint Embedding Predictive Architecture).

Key Takeaways:

  1. Move away from traditional AI methods
    • He suggests abandoning:
      • Generative models (like GPT) in favor of joint-embedding architectures, which predict in representation space rather than reconstructing inputs.
      • Probabilistic models (which predict based on probability) in favor of energy-based models (which focus on optimization).
      • Contrastive methods (which compare differences in data) in favor of regularized methods (which make learning more stable).
      • Reinforcement Learning (RL) in favor of model-predictive control, which plans ahead with a learned world model rather than learning through trial and error (see the sketch after this list).
  2. Limited role of Reinforcement Learning (RL)
    • RL should only be used when predictions fail, to correct the world model or the critic (a system that evaluates decisions).
  3. A Strong Message Against LLMs for AGI
    • He explicitly states: “IF YOU ARE INTERESTED IN HUMAN-LEVEL AI, DON’T WORK ON LLMs.”
    • This suggests that LLMs (like ChatGPT) are fundamentally limited and will not lead to AGI.
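
To make the last shift concrete, below is a minimal sketch of model-predictive control with a learned world model, written in PyTorch. It is a toy under our own assumptions (the WorldModel class, the quadratic distance-to-goal cost, and all dimensions are invented for illustration), not LeCun’s actual system: sample candidate action sequences, roll each one out through the model, and execute the first action of the cheapest rollout.

```python
# Hypothetical sketch of random-shooting model-predictive control (MPC).
# A toy illustration, not LeCun's system; all names and sizes are invented.
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Toy dynamics model: predicts the next state from state and action."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def plan(model, state, goal, horizon=5, candidates=256, action_dim=2):
    """Sample candidate action sequences, roll each out through the learned
    model, and return the first action of the lowest-cost rollout."""
    actions = torch.randn(candidates, horizon, action_dim)
    s = state.expand(candidates, -1)
    cost = torch.zeros(candidates)
    with torch.no_grad():
        for t in range(horizon):
            s = model(s, actions[:, t])                  # imagined next state
            cost = cost + ((s - goal) ** 2).sum(dim=-1)  # distance-to-goal
    return actions[cost.argmin(), 0]  # execute only the first planned action

state_dim, action_dim = 4, 2
model = WorldModel(state_dim, action_dim)
state, goal = torch.zeros(1, state_dim), torch.ones(1, state_dim)
print(plan(model, state, goal, action_dim=action_dim))
```

Note that planning here is itself an optimization over actions: rather than sampling behavior from a learned policy, the system searches for the lowest-cost (lowest-energy) trajectory, which echoes the energy-based framing above. In a full system, RL would step in only when these imagined rollouts diverge from reality, to correct the world model or the critic, as point 2 describes.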

LeCun’s approach to AI development has some alignment with SingularityNET’s goals, but there are also key philosophical and technical differences. Let’s break it down:

Where They Align:

  1. Moving Beyond Traditional Deep Learning
    • LeCun argues that LLMs (like ChatGPT) won’t lead to AGI because they lack true reasoning, planning, and world modeling.
    • SingularityNET (SNET) also sees current AI as narrow and lacking general intelligence, which is why they advocate for decentralized AGI that is more dynamic and autonomous.
  2. Importance of World Models & Energy-Based Learning
    • LeCun’s JEPA model (Joint Embedding Predictive Architecture) focuses on learning abstract representations of the world, rather than just predicting the next token in a sequence (as LLMs do); a rough sketch follows this list.
    • SingularityNET’s OpenCog Hyperon also aims to develop symbolic and connectionist AI that can reason, plan, and build mental models of the world, making it closer to LeCun’s vision than traditional LLMs.
  3. AI Needs to Generalize and Learn from Experience
    • LeCun supports AI that self-improves by interacting with the world, rather than just memorizing data.
    • SNET’s approach also includes self-organizing AI networks, where different AI agents interact and learn collaboratively.
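
As a rough illustration of what “predicting in embedding space” means, here is a minimal JEPA-style training step in PyTorch. It assumes a shared encoder, a latent predictor, and a variance regularizer in the spirit of non-contrastive methods such as VICReg; all module sizes and names are ours, not the published I-JEPA code.

```python
# Toy JEPA-style objective: predict the embedding of a target view from the
# embedding of a context view. Our own sketch, not Meta's I-JEPA code.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
predictor = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))

# Two views of the same underlying scene (random stand-ins here).
context, target = torch.randn(8, 32), torch.randn(8, 32)

z_ctx = encoder(context)
with torch.no_grad():          # target embeddings serve as fixed targets
    z_tgt = encoder(target)

pred_loss = ((predictor(z_ctx) - z_tgt) ** 2).mean()  # predict in latent space

# Regularizer (not a contrastive term): keep per-dimension variance from
# collapsing to zero, in the spirit of VICReg.
var_loss = torch.relu(1.0 - z_ctx.std(dim=0)).mean()

loss = pred_loss + var_loss
loss.backward()
print(float(pred_loss), float(var_loss))
```

The point of the example is the loss: nothing is reconstructed at the pixel or token level, and no negative pairs are compared; the model is judged only on whether its abstract representation of one view predicts that of another.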

Where They Differ:

  1. Centralized vs. Decentralized AI
    • LeCun’s approach focuses on building better AI architectures (like JEPA) but still assumes these would be developed in centralized research environments (e.g., Meta, large AI labs).
    • SingularityNET is fundamentally decentralized, promoting an open, distributed AI network where many independent AI systems can contribute to intelligence development.
  2. Symbolic AI & Hybrid Methods
    • SingularityNET’s OpenCog Hyperon is heavily based on symbolic AI and logical reasoning, integrating neural networks with structured symbolic models (a toy illustration follows this list).
    • LeCun is more skeptical of symbolic AI, preferring energy-based models and predictive learning without explicit logic structures.
  3. Market vs. Research Focus
    • LeCun’s JEPA model is a research-driven paradigm, mostly focused on advancing AI architecture.
    • SingularityNET operates in a Web3 ecosystem, where AI services are offered on a blockchain-based marketplace, making it a mix of research and real-world deployment.
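
For intuition on what integrating neural networks with structured symbolic models can look like, here is a deliberately tiny neuro-symbolic sketch: a neural module asserts a predicate when its confidence clears a threshold, and a symbolic rule then fires over the resulting facts. The predicates, rule, and threshold are invented for illustration; OpenCog Hyperon’s actual Atomspace and MeTTa machinery is far richer.

```python
# Tiny neuro-symbolic sketch: neural perception feeds a rule-based reasoner.
# Invented for illustration; not OpenCog Hyperon's actual machinery.
import torch
import torch.nn as nn

# Neural module: maps raw features to a confidence for one predicate.
perception = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                           nn.Linear(16, 1), nn.Sigmoid())

# Symbolic layer: if all premises hold, assert the conclusion.
rules = {("is_wet", "is_cold"): "risk_of_ice"}

def infer(features, threshold=0.5):
    facts = set()
    if perception(features).item() > threshold:  # neural evidence -> predicate
        facts.add("is_wet")
    facts.add("is_cold")  # fact assumed to come from another source
    derived = {concl for premises, concl in rules.items()
               if all(p in facts for p in premises)}
    return facts | derived

print(infer(torch.randn(1, 8)))
```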

Final Take:

LeCun’s critique of LLMs aligns with SingularityNET’s vision for AGI, but SNET’s focus on decentralization and hybrid AI architectures sets it apart.

This article was written with ChatGPT, using OpenAI’s GPT-4-turbo model.

Navigating the Controversy and Competition in EU’s AI Legislation

The European Union is at a pivotal moment in shaping the future of Artificial Intelligence (AI) regulation, a journey marked by intense debate, international influences, and the looming shadow of US technological dominance. As we delve into this complex landscape, it’s crucial to understand the multifaceted aspects of the EU’s legislative process, the challenges it faces, and the global implications of its decisions.

The Current State of EU AI Legislation

The EU is in the final stages of negotiating the AI Act, a groundbreaking piece of legislation aimed at regulating AI applications, particularly those deemed high-risk. The recent trilogue discussions among the Council, Parliament, and Commission have made significant progress, especially in classifying high-risk AI applications and overseeing powerful foundation models. However, contentious issues remain, such as the specifics of prohibitions and law enforcement exceptions.

The Hiroshima AI Process and International Standards

Parallel to the EU’s efforts, the G7 leaders, under the Hiroshima AI process, have agreed on International Guiding Principles and a voluntary Code of Conduct for AI developers. These principles aim to ensure trustworthy AI development and complement the EU regulations. They focus on risk mitigation, responsible information sharing, and a labelling system for AI-generated content.

Challenges and Disagreements

Despite these advancements, the AI Act faces significant challenges. Negotiations recently hit a roadblock due to disagreements among major EU countries over the regulation of foundation models like OpenAI’s GPT-4. France, Germany, and Italy, influenced by their domestic AI startups, fear that over-regulation could hinder their competitiveness.

The Evolution of the AI Act

It’s essential to trace the AI Act’s evolution to understand its current state. Initially, the European Commission’s draft in April 2021 did not mention general-purpose AI systems or foundation models. However, feedback from stakeholders, including the Future of Life Institute, led to the inclusion of these aspects. The Act has since evolved, with various amendments focusing on high-risk AI systems and the obligations of foundation model providers.

The Role of Powerful Foundation Models

Recent developments have highlighted the need to regulate powerful foundation models. The Spanish presidency’s draft proposed obligations for these models, including registration in the EU public database and assessment of systemic risks. This approach aims to balance innovation with safety and ethical considerations.

The Impact of US Competition

The EU’s legislative process is significantly influenced by the competition from US tech giants. European AI startups, like Mistral and Aleph Alpha, lag behind their US counterparts in resources and development. This disparity raises concerns about the EU’s ability to compete globally in the AI sector. The fear is that stringent regulations might further widen this gap, favoring US companies like OpenAI and Google.

Equinet and ENNHRI’s Call for Enhanced Protection

In a significant development, Equinet and ENNHRI jointly issued a statement urging policymakers to strengthen protection for equality and fundamental rights within the AI Act. Their recommendations include a robust enforcement and governance framework for foundation models and high-impact foundation models, with mandatory independent risk assessments, fundamental-rights expertise, and stronger oversight.

Looking Ahead: The Final Trilogue and Beyond

The next trilogue session on December 6, 2023, is crucial. It will address unresolved issues and potentially shape the final form of the AI Act. The Spanish presidency aims for a full agreement by the end of 2023, but disagreements could push negotiations into 2024, especially with the European Parliament elections looming.

Conclusion

The EU’s journey in regulating AI is a delicate balancing act between fostering innovation, ensuring public safety, and maintaining competitiveness on the global stage. The outcome of the AI Act will not only shape the future of AI in Europe but also set a precedent for global AI governance. As these negotiations continue, it’s vital to keep an eye on how these regulations will evolve in response to technological advancements and international pressures.

For more detailed insights and ongoing updates, refer to the links provided:

  1. European Parliament Legislative Train
  2. Equinet and ENNHRI Joint Statement
  3. Euractiv’s Analysis
  4. AI & Partners 1 December Newsletter