While Silicon Valley treats AI development like a proprietary arms race, Switzerland just demonstrated what happens when you approach artificial intelligence as public infrastructure rather than private intellectual property. The release of Apertus—Latin for "open"—by EPFL, ETH Zurich, and the Swiss National Supercomputing Centre represents the most comprehensive commitment to AI transparency we've seen from any major research initiative. Every component is accessible: architecture, training data, documentation, and ongoing development processes.
This isn't just another open-source model release with marketing spin about democratization. Apertus represents a fundamentally different philosophy about who should control the technologies that will shape human communication, knowledge access, and economic opportunity. When public institutions invest in fully transparent AI development, they're declaring that these capabilities belong in the commons rather than behind corporate paywalls.
The distinction between Apertus and typical "open" AI models reveals how thoroughly Silicon Valley has corrupted the concept of openness. Most companies release model weights while keeping training data, methodologies, and architectural decisions proprietary. Users get access to the final product but remain completely dependent on corporate decisions about updates, pricing, and availability.
Apertus flips this entirely. The 15-trillion-token training dataset, architectural specifications, intermediate checkpoints, and development documentation are all publicly available. According to Open Source Initiative research on AI transparency standards, fewer than 3% of models claiming "open" status actually provide complete training data access. Switzerland's approach meets the highest standards for genuine openness.
The multilingual commitment adds substance to the transparency. Training on over 1,000 languages with 40% non-English content isn't just impressive—it's a rejection of the English-dominant AI development that excludes most of the world's population from technological benefits. Including Swiss German and Romansh demonstrates attention to linguistic diversity that extends beyond market-size calculations to genuine cultural preservation.
Joshua Tan's observation that "Apertus is the leading public AI model: a model built by public institutions, for the public interest" captures something essential about alternative approaches to AI development. When universities and research centers lead development rather than venture-funded startups, the priorities shift from market capture to knowledge creation and social benefit.
The infrastructure analogy—comparing AI to highways, water, and electricity—reframes the entire conversation about AI access and control. These are utilities that democratic societies provide as shared resources rather than profit centers. When Switzerland treats AI development as infrastructure investment, it's signaling that advanced AI capabilities should be public goods rather than private competitive advantages.
Recent economic analysis from the OECD on digital public goods demonstrates that public infrastructure approaches to technology development typically produce higher social returns and more equitable access than private market approaches. Switzerland's investment in Apertus represents exactly this kind of strategic public goods development.
The commitment to ongoing updates and development by public research institutions provides sustainability that commercial models often lack. Corporate AI projects get discontinued when they become unprofitable or strategically irrelevant. Public research initiatives continue as long as they serve academic and social purposes, creating more reliable long-term access.
Switzerland's emphasis on "digital sovereignty" through Apertus addresses genuine national security and economic concerns that dependence on foreign AI systems creates. When critical technologies are controlled by companies in other countries, governments lose agency over their technological futures and become vulnerable to policy changes, service interruptions, or strategic manipulation.
Swisscom's deployment of Apertus on its sovereign AI platform demonstrates how public AI development creates genuine technological independence rather than just proclaimed openness. Instead of licensing capabilities from American companies, Switzerland is developing indigenous AI infrastructure that serves its specific linguistic, cultural, and regulatory requirements.
The compliance approach—adhering to Swiss data protection rules, copyright law, and EU AI Act requirements from the development phase—shows how public research can lead rather than follow regulatory frameworks. Instead of building systems that push legal boundaries and deal with compliance later, Apertus incorporates ethical and legal considerations into fundamental design decisions.
Apertus's training methodology reveals how public research institutions can approach data ethics more comprehensively than private companies constrained by competitive pressures. The commitment to publicly available information, personal data filtering, and website opt-out respect demonstrates systematic attention to consent and privacy that profit-driven development often treats as optional.
The ethical guidelines for excluding unwanted material before training began reflect proactive content curation rather than reactive content filtering. This approach prevents problematic content from influencing model behavior rather than trying to suppress it after training completion. It's the difference between prevention and damage control.
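To make that distinction concrete, here is a minimal, purely illustrative sketch of what proactive curation can look like in a data pipeline. The opt-out list, the personal-data heuristic, and the helper names are assumptions for illustration, not Apertus's actual code.

```python
# Purely illustrative sketch (not Apertus's actual pipeline): filter crawled
# documents *before* they enter the training corpus, instead of suppressing
# model outputs afterwards.
import re
from urllib.parse import urlparse

# Hypothetical opt-out list and a crude personal-data heuristic
OPTED_OUT_DOMAINS = {"opted-out.example"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def keep_document(url: str, text: str) -> bool:
    """Keep a document only if it passes proactive curation checks."""
    if urlparse(url).netloc.lower() in OPTED_OUT_DOMAINS:
        return False  # respect the publisher's opt-out
    if EMAIL_RE.search(text):
        return False  # drop documents containing obvious personal data
    return True

crawled_docs = [
    {"url": "https://opted-out.example/post", "text": "Some article text."},
    {"url": "https://open-data.example/page", "text": "Public report, no emails."},
]
training_corpus = [d for d in crawled_docs if keep_document(d["url"], d["text"])]
print(len(training_corpus))  # -> 1: only the non-opted-out, email-free page survives
```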
The transparency about data sources and filtering processes enables external audit and critique in ways that proprietary development actively prevents. When researchers can examine exactly what data influenced model training, they can identify biases, gaps, and methodological problems that closed development systems hide behind trade secret claims.
The technical specifications—8-billion and 70-billion parameter versions available through permissive licensing—demonstrate that public research can achieve performance scales competitive with private development. This isn't just a proof-of-concept demonstration but a production-ready system that developers can build upon for commercial and research applications.
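For developers, permissive licensing means experimentation can start with standard open-source tooling. As a rough sketch, a Hugging Face transformers workflow like the one below should apply; the model identifier shown is an assumed placeholder, so check the official Apertus release for the exact name.

```python
# Rough sketch of building on an openly licensed checkpoint with the
# Hugging Face transformers library. The model ID below is an assumed
# placeholder; use the identifier from the official Apertus release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-8B-Instruct"  # assumed/placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain why fully open training data matters for auditing a language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```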
Hackathon access and availability to business customers through Swisscom show practical deployment pathways that bridge research development and real-world application. Users don't need to wait for commercial licensing or negotiate corporate partnerships—they can access and experiment with cutting-edge AI capabilities immediately.
The commitment to domain-specific development for law, health, climate, and education represents exactly the kind of specialized application development that public good approaches enable. These aren't necessarily the most profitable AI applications, but they're often the most socially valuable. Private companies optimize for market size; public research can optimize for social impact.
Apertus proves that the resources and expertise required for frontier AI development exist outside Silicon Valley venture capital ecosystems. Swiss research institutions successfully developed capabilities competitive with private companies while maintaining superior transparency and ethical standards. This demonstrates that the concentration of AI development in a few American corporations reflects choice rather than necessity.
The model's success creates opportunities for other countries and research consortiums to pursue similar public AI development projects. The documentation and methodology transparency means that other institutions can build upon Switzerland's approach rather than starting from scratch.
The real test will be whether other democratic countries recognize AI development as infrastructure investment worthy of public funding and research priority. If Switzerland's approach proves successful and gets replicated internationally, it could shift the entire landscape of AI development toward public benefit rather than private extraction.
This represents hope for AI development that serves human flourishing rather than just corporate valuation—and Switzerland has proved it's not only possible but practical.
Ready to support AI development that prioritizes public benefit over private profit? Our team helps organizations understand and implement approaches to technology that serve communities rather than just shareholders.