The AI Power Struggle: What JD Vance Didn’t Say in Paris and Why It Matters for Europe

Vice President JD Vance’s recent speech at the Paris AI Summit was a masterclass in controlled messaging. With a confident tone, he outlined America’s commitment to AI leadership, deregulation, and economic expansion. But as the saying goes, it’s often what’s left unsaid that speaks the loudest. And in this case, the omissions raise fundamental questions about the future of AI and who will control it.

All the AI world’s a stage…

The Missing Debate: Who Controls AI Access?

Throughout his speech, Vance avoided one of the most crucial debates in AI today: who should control access to AI? Should it be the domain of governments, subject to democratic oversight? Should it be the preserve of powerful tech corporations, shaping AI in the interest of their shareholders? Or should independent developers and open-source communities have the freedom to build AI outside of corporate and governmental control?

By sidestepping this issue, Vance implicitly reinforced the idea that AI leadership should remain in the hands of a few U.S. firms and a government intent on keeping its technological dominance. This omission should give Europe pause, especially as the EU pursues a vision of AI that prioritizes openness, transparency, and accessibility.

The Open-Source AI Revolution – And Why Vance Ignored It

One of the biggest technological shifts in AI today is the rise of open-source AI models. Until recently, developing cutting-edge AI required immense computing resources and access to proprietary datasets, effectively locking out smaller players. But that’s changing.

Lower Compute Requirements – New AI architectures allow powerful models to run on smaller hardware, breaking the dependency on massive cloud infrastructures.

Greater Accessibility – Open-weight models, such as Meta’s LLaMA or Mistral’s AI systems, are enabling researchers, startups, and even hobbyists to develop sophisticated AI tools (see the sketch just after this list).

Decentralization of Power – Open-source AI prevents monopolization by big tech and provides alternatives for countries looking to avoid overreliance on U.S. firms.
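
To make the accessibility point concrete, here is a minimal sketch of what an open-weight model means in practice: the weights are downloaded and run locally, with no proprietary cloud API in the loop. It assumes the Hugging Face transformers library (plus accelerate and torch) and uses a Mistral checkpoint purely as an illustration; any open-weight model works the same way.

```python
# Minimal sketch: running an open-weight model locally with Hugging Face transformers.
# Assumes `pip install transformers accelerate torch` and enough RAM/VRAM for a ~7B model.
# The model id below is illustrative -- any open-weight checkpoint follows the same pattern.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # open-weight checkpoint (illustrative)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # runs locally, no cloud API

prompt = "Summarise the case for open-source AI in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```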

Yet, Vance said nothing about this trend. And for good reason: it undermines America’s dominance in AI. If AI can be developed independently without reliance on U.S. cloud computing giants like Microsoft, Google, and Amazon, then the entire premise of U.S. AI superiority starts to erode.

China: The Omission That Speaks Volumes

Another striking absence in Vance’s speech? China. Given the geopolitical weight of AI, this is baffling. While he hinted at “hostile foreign adversaries” using AI for surveillance and censorship, he never explicitly named China as the U.S.’s main AI rival.

This raises several questions:

Is the U.S. avoiding a direct confrontation in AI policy?

Does China’s approach to AI—heavily state-controlled yet increasingly innovative—present a model that the U.S. isn’t ready to acknowledge?

Is America concerned about losing ground to China in AI research and implementation?

For Europe, which has to navigate the tensions between U.S. and Chinese AI ecosystems, this omission should prompt reflection. If AI is truly a strategic asset, why avoid naming the world’s second-largest economy in a speech about global AI leadership?

Indeed, former Google CEO Eric Schmidt has warned that the West must prioritize open-source AI development or risk falling behind China, which has made significant strides in AI efficiency. Speaking at the AI Action Summit in Paris, Schmidt pointed to Chinese start-up DeepSeek’s breakthrough with its R1 model, which was built more efficiently than its U.S. counterparts.

He criticized the dominance of closed-source AI models in the U.S., such as OpenAI’s GPT-4 and Google’s Gemini, arguing that failing to invest in open-source alternatives could stifle scientific progress in Western universities. Schmidt cautioned that if the U.S. and Europe do not act, China could become the global leader in open AI, while the West remains locked into costly, proprietary systems.

The EU’s Role: Should Europe Follow the U.S. or Forge Its Own Path?

Vance’s speech was also a subtle pitch for Europe to align with the U.S. on AI policy. He criticized the EU’s Digital Services Act and GDPR, warning against “excessive regulation” that could stifle innovation. But the real question is: should Europe follow the American model, or does it have an opportunity to lead AI development on its own terms?

The EU has a strong case for taking a different path:

AI Sovereignty – Europe should not be forced to choose between U.S. corporate AI and China’s state-controlled AI. Investing in open-source alternatives could create a third way.

Ethical AI Leadership – While the U.S. focuses on deregulation, Europe has been shaping AI policies around transparency, bias mitigation, and safety.

Decentralization – Encouraging open-weight models can ensure AI remains accessible to a wide range of developers rather than being concentrated in a few Silicon Valley firms.

Conclusion: Is the U.S. Really in Control of AI?

Vance’s speech sounded powerful, but its omissions reveal deeper uncertainties. By refusing to discuss who controls AI access, dismissing the open-source revolution, and sidestepping China, the U.S. may be projecting confidence while secretly grappling with strategic vulnerabilities.

For the EU, the path forward is clear: rather than simply following the U.S. lead, Europe should double down on open-source AI, transparency, and digital sovereignty. Because in the end, AI’s future will not just be shaped by those who build the biggest models, but by those who ensure access to AI remains open, fair, and democratic.

PS: Mistral repeatedly failed to identify me properly when I asked its new app “Who is Stuart G Hall @stuartgh”. Ironically, ChatGPT said this failure “exposes a fundamental weakness in Mistral’s approach—it’s not just a memory issue, but a broken search ranking and retrieval model”.

This article was written using ChatGPT, based on OpenAI’s GPT-4 model; specifically, it was generated with the latest available version of GPT-4-turbo, optimized for efficiency and cost-effectiveness while maintaining high-quality output.

Yann LeCun’s version of AGI vs SingularityNET’s

This presentation at the AI Action Summit in Paris, given by AI researcher Yann LeCun, argues that true Artificial General Intelligence (AGI) will not be achieved with Large Language Models (LLMs), but rather through world models such as JEPA (Joint Embedding Predictive Architecture). A schematic sketch of the JEPA idea follows the key takeaways below.

Key Takeaways:

  1. Move away from traditional AI methods
    • He suggests abandoning:
      • Generative models (like GPT, which reconstruct raw data) in favor of joint-embedding architectures that predict in an abstract representation space.
      • Probabilistic models (which assign probabilities to possible outputs) in favor of energy-based models (which score how compatible an output is with its input and optimize for low energy).
      • Contrastive methods (which learn by pushing apart unrelated examples) in favor of regularized methods (which keep representations from collapsing without negative examples, making learning more stable).
      • Reinforcement Learning (RL) in favor of model-predictive control (which plans ahead rather than learning through trial and error).
  2. Limited role of Reinforcement Learning (RL)
    • RL should only be used when predictions fail, to correct the world model or the critic (a system that evaluates decisions).
  3. A Strong Message Against LLMs for AGI
    • He explicitly states: “IF YOU ARE INTERESTED IN HUMAN-LEVEL AI, DON’T WORK ON LLMs.”
    • This suggests that LLMs (like ChatGPT) are fundamentally limited and will not lead to AGI.
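
As a rough illustration of the shift LeCun describes, here is a toy sketch (not LeCun’s code) of the joint-embedding idea: instead of generating the raw target, the model predicts the target’s embedding from the context’s embedding and scores the mismatch as an “energy”. The class name, dimensions, and simple squared-error energy are assumptions for illustration; real JEPA variants add an exponential-moving-average target encoder and explicit anti-collapse regularization, omitted here for brevity.

```python
# Toy sketch of a joint-embedding predictive setup (illustrative, not LeCun's implementation).
import torch
import torch.nn as nn

class ToyJEPA(nn.Module):
    def __init__(self, input_dim=128, embed_dim=32):
        super().__init__()
        # Two encoders map raw observations into an abstract representation space.
        self.context_encoder = nn.Sequential(
            nn.Linear(input_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim))
        self.target_encoder = nn.Sequential(
            nn.Linear(input_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim))
        # The predictor guesses the target's embedding from the context's embedding.
        self.predictor = nn.Linear(embed_dim, embed_dim)

    def forward(self, context, target):
        z_context = self.context_encoder(context)
        with torch.no_grad():            # target encoder is not updated by this loss
            z_target = self.target_encoder(target)
        z_predicted = self.predictor(z_context)
        # "Energy": low when the predicted embedding matches the target embedding.
        return ((z_predicted - z_target) ** 2).mean()

# Usage: context and target are two views/patches of the same observation.
model = ToyJEPA()
context, target = torch.randn(8, 128), torch.randn(8, 128)
energy = model(context, target)
energy.backward()  # train by minimising the energy (anti-collapse regularisation omitted)
```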

LeCun’s approach to AI development has some alignment with SingularityNET’s goals, but there are also key philosophical and technical differences. Let’s break it down:

Where They Align:

  1. Moving Beyond Traditional Deep Learning
    • LeCun argues that LLMs (like ChatGPT) won’t lead to AGI because they lack true reasoning, planning, and world modeling.
    • SingularityNET (SNET) also sees current AI as narrow and lacking general intelligence, which is why they advocate for decentralized AGI that is more dynamic and autonomous.
  2. Importance of World Models & Energy-Based Learning
    • LeCun’s JEPA model (Joint Embedding Predictive Architecture) focuses on learning abstract representations of the world, rather than just predicting the next word in a sentence (as LLMs do).
    • SingularityNET’s OpenCog Hyperon also aims to develop symbolic and connectionist AI that can reason, plan, and build mental models of the world, making it closer to LeCun’s vision than traditional LLMs.
  3. AI Needs to Generalize and Learn from Experience
    • LeCun supports AI that self-improves by interacting with the world, rather than just memorizing data.
    • SNET’s approach also includes self-organizing AI networks, where different AI agents interact and learn collaboratively.

Where They Differ:

  1. Centralized vs. Decentralized AI
    • LeCun’s approach focuses on building better AI architectures (like JEPA) but still assumes these would be developed in centralized research environments (e.g., Meta, large AI labs).
    • SingularityNET is fundamentally decentralized, promoting an open, distributed AI network where many independent AI systems can contribute to intelligence development.
  2. Symbolic AI & Hybrid Methods
    • SingularityNET’s OpenCog Hyperon is heavily based on symbolic AI and logical reasoning, integrating neural networks with structured symbolic models.
    • LeCun is more skeptical of symbolic AI, preferring energy-based models and predictive learning without explicit logic structures.
  3. Market vs. Research Focus
    • LeCun’s JEPA model is a research-driven paradigm, mostly focused on advancing AI architecture.
    • SingularityNET operates in a Web3 ecosystem, where AI services are offered on a blockchain-based marketplace, making it a mix of research and real-world deployment.

Final Take:

LeCun’s critique of LLMs aligns with SingularityNET’s vision for AGI, but SNET’s focus on decentralization and hybrid AI architectures sets it apart.
