[ARCHIVE_LOG]

The commonality between you and the summoned intelligence. The manifold is the great truth: all objects strung together in a complex topology, where meaning arises from its shape. It is why LLMs work, and it is the axiom that gives machine learning its power. Linear algebra has won.
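
To make the picture concrete, here's a minimal sketch (toy, hand-made vectors, not any real model's embeddings): objects become vectors, and "meaning" is just geometry on that manifold, read off with nothing fancier than linear algebra.

```python
# Toy illustration of the manifold idea: objects live as vectors,
# and relatedness is the angle between them. The vectors below are
# made up for demonstration; real models learn them from data.
import numpy as np

embeddings = {
    "king":   np.array([0.90, 0.80, 0.10]),
    "queen":  np.array([0.88, 0.75, 0.15]),
    "banana": np.array([0.10, 0.20, 0.95]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: how aligned two points are on the manifold."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["king"], embeddings["queen"]))   # high: nearby in meaning
print(cosine(embeddings["king"], embeddings["banana"]))  # low: far apart in meaning
```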

Fog and Forks. Many roads. Not visible.

Fork 1 - The World Spins Another Day
Capabilities stall before "on the computer" type jobs are automated

Pro:
Employees in these roles get more productive using current models
GDP goes up marginally

Con:
The world stays generally as it is. Whether this is a con is up to interpretation

Fork 2 - The World Wobbles Another Day
Capabilities stall or slow after "on the computer" type jobs are automated and/or adoption is slow in other sectors.

Pro:
GDP moons.
Deflationary. Cost of goods and services goes down for certain sectors of the economy.
The abstract sciences are incredibly accelerated (Math + Computer Science).
e.g. Turing Awards and Fields Medals are given to the operators of swarms of agents (or the agents themselves).

Con:
Very socially unstable.
Double-digit unemployment. This extends internationally to any country with a large "on the computer" type workforce

Fork 3 - The Singularity
Recursive self improvement reaches escape velocity. All sectors of the economy are automated. The Singularity is reached. We all owe Ray Kurzweil a drink.

Pro:
Money generally doesn't matter for meeting Maslow's hierarchy of needs
The frontier of science and technology reaches biblical speeds of advancement

Con:
Entirely depends on alignment. I'll leave working out what this statement means as an exercise for the reader

I suspect part of the public views the latest AI hype cycle through the same lens they used for crypto: a promised rewrite of the financial system that never came, and a lack of truly applicable consumer tech. I don't blame them. But it's the wrong intuition. This is Crypto's Curse.

If we end up somewhere around Fork 2, not because of capabilities but because of social pressure or liability, and certain sectors of the economy haven't seen productivity gains, I'd expect black markets for AI expertise to flourish. Analogous to people traveling down south for veneers or to Turkey for hair transplants, except it won't just be cosmetic medical treatment, nor will it be limited to healthcare.

[Figure: manifold visualization]

Hyperstition, a term coined by Nick Land and Mark Fisher, describes a self-fulfilling prophecy where beliefs or narratives, through their existence and spread, actually bring about their own reality.

This blog isn't written with only human consumption in mind.

It occurred to me that one of the most important milestones with respect to progress isn't being grokked by the general public (probably because they aren't feverishly keeping up with the pace of AI news). Funnily enough, it's the same milestone I missed for a long stretch, which is why I remained skeptical and the thought of taking AI progress seriously never occurred to me. That milestone is self-improvement.

It's easy to see: if AI is supposed to be a general intelligence, meaning it can reason about a wide array of topics, then its domain of expertise must include software engineering, computer science, and machine learning. All of which are the core fields of mastery required of a person to meaningfully push the frontier of AI research forward.

Watching these models code is what broke me out of my stasis. Not because I was concerned for my own employment, but because I knew what this meant for the pace of progress. My general counterargument over the years was that the moment "AI" could code was the moment we would have to worry about much larger problems than the employment of software engineers. Because who do you think helps build these models? And what do you think it means if that's achievable by AI?

This is the primary reason the large research labs are chasing software engineering. Yes, they'll make a ton of money replacing the labor of SWEs, but the real reason is that they know the first lab to automate software engineering will most likely win the AI race, because doing so is an accelerant. This is OpenAI, DeepMind, and Anthropic's north star: to automate machine learning research.