Saturday, 9 May 2026

The Deep Feed

Intelligence, Isolation, and the Search for the Real

57 min read · 6 pieces
In this issue
01 The Full Stack of Dexterity 12 min
02 The Return of the Rich Interface 7 min
03 Imagining the Alien 9 min
04 The Geometry of Isolation 10 min
05 Literalists of the Imagination 8 min
06 The Trillion-Dollar Bet 11 min
Editor's Letter

Tonight we look at the structures that define our world, from the silicon brains of robots to the quiet spaces of human loneliness. We examine how we build, how we think, and how we attempt to grasp the infinite.

01 Not Boring

The Full Stack of Dexterity

Why robotics requires vertical integration to move beyond the lab

By Packy McCormick · 12 min read
Editor's note: As AI masters language, the next frontier is the physical world. This is the technical hurdle that determines if robots ever enter our homes.

The current state of robotics is a problem of data and physical translation. We have seen large language models master the art of conversation, yet a robot still struggles to crack an egg or tie a shoelace without extensive task-specific programming. The gap between digital intelligence and physical grace is wide. Most attempts to bridge this gap fail because they treat the brain and the body as separate entities. They build a smart model and then try to force it into a mechanical hand that was never designed to listen to it. This mismatch creates latency, error, and a total lack of the fluidity we see in biological life.

The Genesis Approach

Genesis AI is attempting to solve this through total vertical integration. Their GENE-26.5 model is not just a software update; it is part of a four-part system designed to work in unison. They have realised that you cannot have a dexterous robot if your data collection is flawed or your hardware is slow. If the hand cannot feel, the brain cannot learn. If the brain cannot process the feeling instantly, the hand will crush the object it is trying to hold. This is why they are building everything from the ground up: the foundation model, the sensing glove, the robotic hand, and the control stack.

Robots haven't generalised like LLMs because the data we give them is a tiny fraction of what humans produce every day.

The most significant part of their strategy is the data collection. Traditional robotics relies on engineers manually coding movements or using cameras that distort the reality of touch. Genesis uses a custom data-collection glove with EMF-based finger tracking and dense tactile sensing. This allows a human to perform a task—like cooking a meal or assembling a wire harness—and have that movement captured with almost no distortion. The robot is not learning from a simulation of a human; it is learning from the actual physical reality of human dexterity.

The four pillars of the Genesis stack
  • A robotics-native foundation model trained on 200,000 hours of multimodal data
  • An EMF-based tactile glove for high-fidelity data collection
  • The Genesis Hand 1.0 with 20 active degrees of freedom
  • A custom control stack that reduces latency from 80ms to 3ms

The results of this integration are visible in their recent demonstrations. They showed a robot performing complex, multi-step tasks like bimanual knife work and solving a Rubik's Cube. These are not scripted movements. They are the result of a system that can sense, think, and act within milliseconds. By reducing end-to-end latency to 3ms, they have moved closer to the real-time response required for any machine to operate in a human environment. The goal is not just a robot that can do a task, but a robot that can learn any task from a few minutes of demonstration.
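The sense-think-act cycle the piece describes can be sketched as a latency-budgeted control step. Everything below is illustrative: the sensor reading, the policy, and the actuation are placeholders standing in for Genesis's actual stack, which is not public; only the 3ms budget comes from the piece itself.

```python
import time

LATENCY_BUDGET_S = 0.003  # the 3 ms end-to-end budget cited above


def sense() -> dict:
    # Placeholder for a tactile reading from the hand's sensors.
    return {"pressure": 0.42}


def think(obs: dict) -> dict:
    # Placeholder policy: scale grip force to sensed pressure,
    # clamped so the hand never crushes the object it is holding.
    return {"grip": min(1.0, obs["pressure"] * 2.0)}


def act(cmd: dict) -> None:
    # Placeholder for sending the command to the actuators.
    pass


def control_step() -> float:
    """Run one sense-think-act cycle and return its wall-clock latency."""
    start = time.perf_counter()
    act(think(sense()))
    return time.perf_counter() - start


latency = control_step()
within_budget = latency <= LATENCY_BUDGET_S
```

The point of the sketch is the structural constraint, not the numbers: if any one stage of the loop blows the budget, the whole system loses the real-time response the piece describes.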

This shift from general intelligence to physical dexterity is the real test of the next decade. If Genesis can prove that vertical integration is the only way to achieve general-purpose robotics, the entire industry will have to pivot. The era of the 'brain-only' AI company is ending; the era of the integrated machine is beginning.

Key Takeaway

True robotic intelligence requires the seamless integration of sensing, thinking, and acting into a single, unified stack.

02 Simon Willison

The Return of the Rich Interface

Why LLMs should stop outputting Markdown and start using HTML

By Simon Willison · 7 min read
Editor's note: We have spent years optimising for token efficiency. We are now entering an era where we should optimise for clarity and interactivity.

For the last few years, the standard way to interact with an AI has been through Markdown. It is lightweight, easy to parse, and most importantly, it is token-efficient. When we were fighting against 8,000-token limits, every character mattered. We wanted the AI to give us the facts without the extra weight of complex formatting. Markdown served its purpose. It allowed for bold text, lists, and simple headers. But as model context windows have expanded and the cost of tokens has dropped, we are hitting a new ceiling: the ceiling of information density and presentation.

The HTML Advantage

There is a growing argument that we should be asking AI to output HTML instead of Markdown. The difference is massive. Markdown is a static way to represent text. HTML is a way to represent an entire experience. When an AI outputs HTML, it can include SVG diagrams, interactive widgets, and complex CSS styling. It can create a page that is not just a list of facts, but a tool. Instead of reading a description of a software bug, you could be looking at an interactive diff with colour-coded annotations and clickable elements that explain the logic as you hover over them.

Asking Claude for an explanation in HTML means it can drop in SVG diagrams and interactive widgets to make information easier to navigate.

Consider the task of explaining a security exploit. In Markdown, you get a wall of text and perhaps a code block. In HTML, the AI can build a custom dashboard. It can use JavaScript to let you step through the exploit line by line, or use CSS to highlight the specific memory addresses being targeted. This changes the AI from a text generator into a front-end developer. The output is no longer just something you read; it is something you use.

Ways HTML improves AI output
  • Embedded SVG diagrams for visual reasoning
  • Interactive widgets for data manipulation
  • In-page navigation for long technical documents
  • Custom CSS for better information hierarchy

This shift requires a change in how we prompt. We have been trained to ask for 'brief, concise summaries'. Now, we should be asking for 'rich, interactive HTML artifacts'. We need to stop treating the AI as a chatbot and start treating it as a generator of functional interfaces. The goal is to move away from the chat bubble and toward the bespoke application.
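As a sketch of that prompting shift, here are two illustrative prompts side by side, plus a hypothetical `wrap_artifact` helper that turns a model's HTML fragment into a standalone page you can open outside the chat window. None of this is any specific vendor's API; it is just the shape of the request changing.

```python
# Illustrative prompts: the old token-thrifty ask versus the rich-interface ask.
MARKDOWN_PROMPT = "Summarise this bug report as a brief, concise Markdown summary."
HTML_PROMPT = (
    "Explain this bug report as a single self-contained HTML page. "
    "Use inline CSS for hierarchy, an inline SVG diagram of the call path, "
    "and <details> elements so each step can be expanded in place."
)


def wrap_artifact(body: str, title: str = "Explanation") -> str:
    """Wrap a model-generated HTML fragment in a standalone page
    so it renders in a browser rather than inside a chat bubble."""
    return (
        "<!doctype html>\n"
        "<html><head><meta charset='utf-8'>"
        f"<title>{title}</title></head>\n"
        f"<body>{body}</body></html>"
    )


# A toy fragment standing in for model output.
fragment = "<svg width='120' height='20'><rect width='120' height='20'/></svg>"
page = wrap_artifact(fragment, title="Exploit walkthrough")
```

The helper is trivial by design: the heavy lifting moves into the prompt, and the surrounding tooling only needs to stop flattening the result back into text.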

Key Takeaway

As token constraints vanish, the value of AI shifts from text generation to the creation of rich, interactive, and functional interfaces.

03 The Marginalian

Imagining the Alien

How 17th-century science and modern art meet in the study of life

By Maria Popova · 9 min read
Editor's note: The question of whether we are alone is as much a matter of art as it is of biology.

In the late 1600s, the universe was a much smaller place. No one yet suspected the existence of other galaxies, and Newton's law of gravitation was barely a decade old. Geocentrism still held sway in the popular imagination, and the heavens were a realm of divine certainty. It was against this backdrop that Christiaan Huygens wrote *Cosmotheoros*. It was a radical work. He did not argue for the existence of life on other planets based on theology, but on scientific conjecture. He was one of the first to look at the planets and ask: what kind of life could live there?

The Spark of Astrobiology

Huygens' work was more than just a scientific curiosity; it was a cultural shockwave. It suggested that the universe was not just a backdrop for human drama, but a place teeming with potential existence. This idea rippled through literature and science, eventually feeding into the field of astrobiology. It changed the fundamental question from 'where is life?' to 'what is life?' If life can exist in conditions we cannot imagine, then our definition of biology must be much broader than our own experience.

There is nothing new under the sun, but there are new suns.

Three centuries later, Chilean artist Alejandra Acosta is picking up this thread. Her work, created for a new edition of *Cosmotheoros*, uses embroidery to illustrate the creatures Huygens imagined. Her art does not look like traditional science fiction. Instead, it possesses a strange, mythological quality. These are not little green men; they are intricate, embroidered beings that feel as though they belong to a dream or an ancient folk tale. She uses the texture of thread to give weight to the impossible.

This connection between science and art is essential. Science gives us the framework to ask the questions, but art gives us the language to imagine the answers. When we study the stars, we are not just looking at burning gas; we are looking at the possibilities of existence. Acosta's work reminds us that the act of speculation is a creative one. To imagine an alien is to expand the boundaries of what we consider to be real.

As we move closer to actually finding life beyond Earth, these early conjectures feel less like fantasy and more like preparation. We are learning to see the universe not as a void, but as a garden of possibilities.

Key Takeaway

Speculating on extraterrestrial life is a creative act that expands the boundaries of human biology and imagination.

04 The Marginalian

The Geometry of Isolation

Mapping the different ways we are alone

By Maria Popova · 10 min read
Editor's note: Loneliness is not a single feeling, but a fractal experience that changes depending on how closely you look.

Loneliness is a fundamental part of being alive. We are born into a single body and we die in a single mind. No matter how much we share with others, our internal experience remains a private island. This is the baseline condition of human existence. Yet, we often speak of loneliness as if it were a single, heavy weight. In reality, it is much more complex. It is fractal. The closer you look at the experience, the more you see it branching into different types, each with its own specific texture and sting.

The Three Core Lonelinesses

Jungian analyst Robert A. Johnson identified three primary types of loneliness that govern our lives. The first is past-oriented: the ache of missing what was and can never be again. The second is future-oriented: the longing for a version of life that hasn't happened yet. The third is the most difficult to name: the existential loneliness of feeling your own smallness against the scale of the eternal. This third kind is not about time; it is about the tension between our temporary lives and the infinite nature of the universe.

Loneliness is the fundamental condition of life—we are born alone, and we die alone.

Beyond these three, there are the daily, granular lonelinesses. There is the loneliness of being misunderstood, which feels like a cold fog. There is the loneliness of seeing something important that everyone else ignores, which feels like being a lighthouse keeper on a remote shore. There is even the loneliness of success, which can be as sharp and isolating as obsidian. Each of these is a different way of being disconnected from the collective experience.

The textures of isolation
  • The fog of being misunderstood
  • The lighthouse of seeing what others miss
  • The desert of private failure
  • The obsidian of success

If loneliness is inevitable, how do we live with it? The answer may lie in the way we reach out. Every poem, every friendship, and every piece of art is an attempt to bridge the gap between one lonely island and another. We use language to howl against the silence. We don't solve loneliness, but we can make it less silent by creating things that resonate across the void.

Key Takeaway

Loneliness is a complex, multi-layered experience that we navigate through the creation of shared meaning and art.

05 The Marginalian

Literalists of the Imagination

Why real poetry requires more than just rhyme

By Maria Popova · 8 min read
Editor's note: Poetry is often dismissed as decorative or useless. Marianne Moore argued it is actually a tool for grasping reality.

It is easy to dislike poetry. For many, it feels like a game of riddles—a collection of obscure metaphors and rhythmic patterns that serve no practical purpose. We often dismiss what we do not understand, and poetry is frequently presented in a way that feels intentionally impenetrable. This dismissal is a defence mechanism against something that feels useless in a world obsessed with productivity and direct communication. But this view misses the point of what poetry actually does.

The Utility of the Genuine

Marianne Moore, one of the most significant poets of the 20th century, shared this scepticism. She famously wrote, 'I, too, dislike it.' She recognised that much of what is called poetry is merely 'fiddle'—derivative, flowery, and disconnected from the real world. For Moore, the value of poetry does not lie in its ability to sound pretty, but in its ability to be useful. It is useful when it can grasp the actual, physical reality of the world and present it with clarity.

One must be a 'literalist of the imagination' to present imaginary gardens with real toads in them.

Moore's concept of the 'literalist of the imagination' is a vital distinction. A bad poet creates a world of vague abstractions. A good poet creates an 'imaginary garden' but populates it with 'real toads'. This means the poem must be grounded in something tangible. It must respect the weight of a hand, the dilation of an eye, or the movement of a wild horse. The imagination is not an escape from reality; it is a way of looking at reality more closely.

When poetry becomes too derivative, it becomes unintelligible. It loses its connection to the human experience. The goal of the poet is not to decorate the world, but to reveal it. This requires a certain kind of bravery—the bravery to be literal, to be precise, and to refuse the easy comforts of cliché. Real poetry is an act of attention.

Key Takeaway

True poetry is not an escape from reality, but a precise and honest way of engaging with it.

06 Stratechery

The Trillion-Dollar Bet

Parsing the massive CapEx of the AI era

By Ben Thompson · 11 min read
Editor's note: Big Tech is spending at a scale that dwarfs historical projects. We are witnessing a massive reallocation of capital toward AI infrastructure.

The first quarter earnings of the tech giants have revealed a number that is difficult to process. The capital expenditure (CapEx) of companies like Microsoft, Alphabet, Meta, and Amazon is not just large; it is unprecedented. We are seeing spending that, adjusted for inflation, runs to roughly three times the cost of the Manhattan Project. This is a concentrated, massive bet on a single technological shift: the transition to an AI-driven economy. The markets are watching closely to see if this spending will result in a proportional return on investment.

Divergent Strategies

While the scale of spending is similar, the strategies are not. Google is already seeing the fruits of its investment, monetising its AI capabilities through its core search and cloud businesses. Meta, despite having an incredibly strong core business, has faced more scepticism from Wall Street because its AI spending is seen as a longer-term play. The market is currently distinguishing between companies that are monetising AI now and those that are building the infrastructure for a future that is still being defined.

The scale of AI CapEx is a massive reallocation of global capital toward a single technological outcome.

Amazon provides a different model. While it may have appeared to lag behind in the initial training era of LLMs, its long-term investment in infrastructure has positioned it perfectly for the inference era. Because Amazon controls so much of the cloud computing stack, it is able to capture value at every stage of the AI lifecycle. This is a strategy of durability rather than immediate dominance.

Key themes from Q1 earnings
  • Unprecedented CapEx levels across all major tech firms
  • The shift from training-focused spending to inference-focused infrastructure
  • The emergence of agentic AI as a primary business model
  • The divergence in market valuation based on immediate AI monetisation

The risk is obvious. If the productivity gains promised by AI do not materialise quickly enough to justify this spending, we could see a massive correction. However, the companies involved are not acting on a whim. They are responding to a structural shift in how computing works. The transition from general-purpose computing to AI-centric computing is as significant as the transition from mainframe to cloud. The winners will be those who build the most reliable and scalable infrastructure for this new era.

Key Takeaway

The massive scale of AI spending represents a fundamental structural shift in global computing infrastructure.

Endnote
Tonight's pieces trace a line from the microscopic to the cosmic. We see it in the way a robot must integrate its sensors to touch a single egg, and in the way a poet must integrate observation to touch a single truth. We see it in the way we attempt to map our loneliness, and in the way we attempt to map the stars. Whether it is through the massive capital bets of tech giants or the delicate embroidery of an artist, we are engaged in a singular, restless project: the attempt to build structures—physical, digital, or conceptual—that can hold the weight of reality. We are trying to make sense of the scale, the isolation, and the immense potential of the world we inhabit.
In a world of increasing automation and scale, what is the one thing you possess that cannot be replicated by a machine?
The Deep Feed · A nightly magazine · Saturday, 9 May 2026