In this second half of our two-part post, we look at the opportunities and challenges as AI evolves. Read part 1 here.
Back in the early years of the World Wide Web, everyone thought it was all about “portals” like HotBot, Yahoo and AltaVista.
They delivered stock news, weather, sports AND search. Everything changed when Google blew them away with better technology and a key insight: that it was critical to get people OFF your website as soon as possible, by delivering a good search result. Essentially overnight in business terms, the competitors went the way of the horse and cart.
So with AI, the jury is still out on whether the large-model companies (OpenAI, Google, Meta, Inflection) and the startups building on top of them will win out. The analogy may be clumsy, but as Rory O'Driscoll of Scale Ventures said recently, “No one knows nothing right now”.
So it's possible the defensive moat of today’s leaders is more fragile than we assume, especially given the amount of work being done in the open-source domain.

With go-to-market timelines compressing rapidly, companies can now go from research to billion-dollar products in less than 12 months. The number of new foundation models released doubled in the past year alone, with 65% of them open-source.
Transformation Beyond Transformers
We’re also incredibly early in the drive towards artificial ‘super intelligence’: an AI that surpasses human intelligence in all aspects. Though here things become murky. Surpassing all human knowledge has arguably been achieved, as most of the major LLMs will know more (cribbed from the internet) than any human possibly can. But while rapid advance toward next-generation AI is inevitable, transformers can only “see” a fixed number of tokens at a time, and longer contexts come with quadratically greater compute costs. Some would argue that while LLMs will probably continue to sound smarter and smarter, they will never reach truly human-level “intelligence” (depending, of course, on how you define intelligence).
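The context-window cost is worth making concrete. In a standard transformer, self-attention compares every token with every other, so the number of comparisons grows with the square of the context length. A minimal back-of-envelope sketch (pure arithmetic, no real model):

```python
def attention_pairs(context_tokens: int) -> int:
    """Number of token-to-token comparisons in one self-attention pass."""
    return context_tokens * context_tokens

# Doubling the context window roughly quadruples the attention compute.
assert attention_pairs(8192) / attention_pairs(4096) == 4.0

for n in (4096, 8192, 32768, 131072):
    print(f"{n:>7} tokens -> {attention_pairs(n):.2e} comparisons")
```

This is why long-context models lean on tricks like sparse or sliding-window attention rather than simply widening the window.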
“As the amount of training data increases and models get larger, it becomes harder to differentiate between memorization versus real learning or intelligence. From a product standpoint, this may not matter, but it will cause huge frustration as soon as you ask about something the model wasn't trained on, but could be worked out with common sense,” says Zehan Wang, CEO of Paddington Robotics.
A huge part of the problem is that common sense involves things that we take for granted, and often don’t write down in the content that AI models are trained on.
A lack of native memory, limited interpretability and high retraining costs present opportunities for new architectures like State Space Models (SSMs), Mixture of Experts (MoEs), Neural Module Networks (NMNs), and Neurosymbolic Architectures.
“I don’t think we’ve yet seen the technology which will take us to artificial general intelligence, true AGI,” says Andrew J Scott, 7percent’s Founding Partner.
A centralized, data-center-focused model of development may also prove simply too limited for the scale required. Decentralized systems, such as those being developed by Gensyn, may provide a solution to the vast compute capacity needed.

Embodied AI and robotic systems will need on-the-fly learning to become compelling, useful robots that adapt in real time to the world around them.
Ben Fielding suggests that “...we’ll likely move past benchmarks as the standards for assessment and over to pure productisation…the big labs are still competing on benchmarks as a customer acquisition tool, but they don't matter. What matters is usability and whoever acknowledges that first and doubles down hard will win more users.”
New Physical Limits
The progress of computing until now has been driven by relentless improvement in processing power.
In 1965, Moore’s Law predicted that computing power would double every two years, driven by the rapidly increasing number of transistors on a microchip. Remarkably, this has held broadly true for half a century. Increasingly powerful mobile devices and the falling cost of technology in general have all depended on this progress, along with increased storage capacity (Kryder’s Law), data transmission (e.g., Butters’ Law) and broader economies of scale (Wright’s Law).
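The compounding here is easy to underestimate. A quick illustrative calculation (doubling every two years, not actual historical chip data):

```python
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Multiple on transistor count after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

# Fifty years at one doubling every two years is 2**25:
# over a 33-million-fold increase.
print(f"{growth_factor(50):,.0f}x")  # 33,554,432x
```

That compounding is what turned room-sized mainframes into phones, and it is exactly what atomic-scale physics now threatens to halt.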
But as transistors approach atomic scale, they are nearing the minimum feasible size that physics will allow. IBM’s current experimental transistors are only around an order of magnitude larger than the silicon atoms used to make them!
In addition, today’s architectures use logic gates that destroy one bit of information in every operation, releasing heat as they do. The Landauer limit therefore places a hard physical floor on how energy-efficient existing semiconductor architectures can be.
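That floor is a real number, not a metaphor: erasing one bit must dissipate at least E = k·T·ln(2), where k is the Boltzmann constant and T the temperature. A few lines of Python show the scale at room temperature:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K (exact, per the 2019 SI definition)

def landauer_energy_joules(temperature_kelvin: float) -> float:
    """Minimum energy dissipated by erasing one bit: k * T * ln(2)."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At room temperature (~300 K), the floor is about 2.871e-21 J per bit.
print(f"{landauer_energy_joules(300):.3e} J per erased bit")
```

Today’s chips sit orders of magnitude above this floor, which is why reversible-computing approaches, which avoid destroying bits in the first place, attract interest as a way around the limit.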
There is now heavy investment in sticking a bandage over this energy problem, with a raft of startups focused on novel cooling or energy solutions that may deliver iterative improvements. But these are unlikely to be the big winners of the next decade, dependent as they are on the existing architecture.
For new entrants to win, they must deliver a real step change in capability, not incremental progress. It’s very hard, for example, to beat Nvidia on GPUs for training models, with its CUDA ecosystem fighting off competition from an increasingly well-funded open-source ecosystem.
And today’s hardware is expensive: Meta’s 2023 LLaMA model was trained on $30m of GPU hardware. Training models and re-training them for certain tasks is currently the most capital-intensive part of the revolution.
Fundamentally new hardware approaches (analog, photonic, reversible, neuromorphic, quantum, biological and 3D chips) are poised to disrupt the status quo. Scientific breakthroughs in materials such as graphene and silicon carbide, each with unique properties, could enable the creation of smaller transistors or entirely new types of chip. The big winner(s) will be those who transcend the current stack.

However, hardware alone is not enough. To compete, innovators will have to bridge the gap between their hardware and the software that will use it.
Funding in AI chips and processors has been dominated by the US (45%) and China (27%), with Europe accounting for just 6% of investment in the sector.
The UK and Europe also missed the starting gun for AI, despite decades of academic research in the field. The clock is now ticking on their chance to catch up.
Europe and the UK must focus on areas of excellence, in quantum, next-generation compute and AI, to avoid the US owning the next digital decade as it has dominated the internet, cloud storage, search, email and, thus far, AI.

Technology Is The Future Of Everything
Whether and how we solve humanity’s hardest problems, from curing disease to climate change, will hinge on the success of these new technologies. The companies providing that technology will become the drivers of the global economy.
For society, technological resilience means investing heavily in technologies that span the compute stack, but also the software and hardware infrastructure that supports it. Little point having a data centre if someone can take out your power supply with a software hack.
We must retool for the coming age of embodied AI: intelligence is moving off our computer screens and into physical-world robots that we will interact with day to day. Leadership by free, democratic nations in these fields is critical to avoiding not only long-term economic decline, but also societal manipulation or loss of liberty.
AI also represents a potential challenge to our existing way of life, from fake news and financial scams to future robots gone rogue. It’s vital we get to set the regulations that ensure our safety, and that’s difficult to do if you’re beholden to others.
We are witnessing a tidal wave of ambitious founders tackling these problems at the intersection of compute and AI. Those breakthroughs offer the most visionary investors opportunities for outsized returns and the chance to help shape the society of the future.
At 7percent, over 25% of our investments have been made as part of our Future Compute thesis, focused on next-generation architectures and foundational intelligence, with portfolio companies including Gensyn, Nu Quantum, Magic Pony Technology (exited to Twitter), Plumerai, Universal Quantum, Vaire, and (most recently) Paddington Robotics.
Special thanks to Ben Fielding of Gensyn and Zehan Wang of Paddington Robotics for adding their insights.