<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Loeber on Substack]]></title><description><![CDATA[On interactions between people, markets, and technology.]]></description><link>https://essays.johnloeber.com</link><image><url>https://substackcdn.com/image/fetch/$s_!h5IZ!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2Ff6ab5ae4-884d-48d7-8923-bb3f5c89b5c0_262x262.png</url><title>Loeber on Substack</title><link>https://essays.johnloeber.com</link></image><generator>Substack</generator><lastBuildDate>Sun, 19 Apr 2026 16:32:44 GMT</lastBuildDate><atom:link href="https://essays.johnloeber.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[John Loeber]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[loeber@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[loeber@substack.com]]></itunes:email><itunes:name><![CDATA[John Loeber]]></itunes:name></itunes:owner><itunes:author><![CDATA[John Loeber]]></itunes:author><googleplay:owner><![CDATA[loeber@substack.com]]></googleplay:owner><googleplay:email><![CDATA[loeber@substack.com]]></googleplay:email><googleplay:author><![CDATA[John Loeber]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[#32: Contra Citrini7 (Repost)]]></title><description><![CDATA[I have recently begun publishing shorter essays on X/Twitter. My critique of Citrini7&#8217;s essay &#8220;2028 Global Intelligence Crisis&#8221; from one week ago received good pickup there, so I&#8217;m re-publishing it for you here. Feel free to skip if you&#8217;d already seen it. 
In due time, all my X essays will also be re-published here on Substack.]]></description><link>https://essays.johnloeber.com/p/32-contra-citrini7-repost</link><guid isPermaLink="false">https://essays.johnloeber.com/p/32-contra-citrini7-repost</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sun, 01 Mar 2026 19:06:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9024f5a6-27ac-4cc8-b232-bef895b7d9af_1654x852.png" length="0" type="image/png"/><content:encoded><![CDATA[<p><em>I have recently begun publishing <a href="https://x.com/johnloeber/articles">shorter essays on X/Twitter</a>. My critique of Citrini7&#8217;s essay &#8220;2028 Global Intelligence Crisis&#8221; from one week ago received good pickup there, so I&#8217;m re-publishing it for you here. Feel free to skip if you&#8217;d already seen it; there are no changes to the content. In due time, all my X essays will also be re-published here on Substack.</em></p><div><hr></div><p>Popular markets commentator Citrini7 recently published a compelling and widely read <a href="https://substack.com/home/post/p-188821754">piece of AI doomer fiction</a>, which he admits has only a small probability of occurring. But I am old enough to have seen many cycles of economic doomsaying. I want to present a critique of Citrini&#8217;s work and show a much likelier, more positive view of the future.</p><h3><strong>1. Never Underestimate Institutional Momentum</strong></h3><p>In 2007, people thought the US was geopolitically done under peak oil. In 2008, they thought the US dollar was just shy of collapse. In 2014, they thought AMD and NVIDIA were done. Then came ChatGPT, and they thought Google was done...
Every time, existing institutions with momentum have proven themselves far more durable than onlookers thought.</p><p>For an essay worried about institutional turnover and rapid labor replacement, it is very funny that Citrini writes:</p><blockquote><p>Even places we thought insulated by the value of human relationships proved fragile. Real estate, where buyers had tolerated 5-6% commissions for decades because of information asymmetry between agent and consumer...</p></blockquote><p>People have been calling for the end of the real estate broker for 20 years! You don&#8217;t need superintelligence for this! All you need is Zillow or Redfin or Opendoor. This example actually shows the very opposite of Citrini&#8217;s point: we have a type of labor that most people consider obsolete, and yet, market inertia and regulatory capture have made the real estate broker <em>far more resilient </em>than anyone would&#8217;ve bet a decade ago. </p><p>My wife and I bought a house a few months back. The transaction required us to have an agent, ostensibly for the above reasons. Our buyer&#8217;s agent made about $50,000 on the deal, for about ten hours of form-filling and party-coordination that I could&#8217;ve done myself. This market will <em>eventually</em> be efficient and price this labor fairly, but it takes a long time to get there. I know a lot about inertia and change management: I built and sold a company that focused on moving insurance brokerages from <em>service</em> to <em>software</em>, and the main thing I learned is the <strong>iron rule of dealing with human reality:</strong> everything is always more complicated and takes much longer than you think it will, even if you already know about the iron rule.
<em>That doesn&#8217;t mean that a meaningful change in the world won&#8217;t happen, but that the change will be more gradual, giving us the time to respond and adjust.</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://essays.johnloeber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://essays.johnloeber.com/subscribe?"><span>Subscribe now</span></a></p><h3><strong>2. Software Has Infinite Demand for Labor</strong></h3><p>The software sector has been struggling in recent months as investors fear that companies like Monday, Salesforce, Asana, etc. can now be easily replicated and that the value of their backend systems is indefensible. Citrini and others talk of AI coding as spelling the end of jobs at SaaS companies as (1) the products become obsolete/zero-margin and (2) the jobs themselves disappear.</p><p>What everyone seems to be missing is this: <strong>these products fucking suck</strong>. I can say this, because I&#8217;ve actually spent hundreds of thousands of dollars on Salesforce and Monday. Sure, maybe AI enables competition to replicate their products. But more importantly, AI enables competition to <em>deliver better products</em>. It&#8217;s no surprise to see the stocks drop: an uncompetitive, sticky lock-in sector filled with dogshit incumbents is becoming competitive again. </p><p>More generally, it is uncontroversial that <strong>virtually all current software is garbage</strong>. Everything I use and pay for is littered with bugs. Some software is so broken that I can&#8217;t even pay for it. I have not been able to send a wire using Citibank&#8217;s online banking in three years. Most web apps can&#8217;t even get mobile vs. desktop right. Nothing has the functionality that you want. Everything is deficient. 
Silicon Valley darlings like Stripe and Linear have built massive followings just by <em>not being as insanely unusable and horrendous as their competitors. </em>Ask tenured engineers &#8220;show me a piece of good software&#8221; and you&#8217;ll get long silences and blank stares.</p><p>There is a deep and important truth here: even if we get something like the <a href="https://x.com/tbpn/status/2019585837038809228?s=20">Software Singularity</a>, the level of demand for labor here is practically infinite. Famously, it is the last few percent of completion that take the most work, and by that token, virtually every software product could probably scale up its complexity and features by something like 100x before beginning to saturate demand. </p><p>I have the feeling that commentators on the imminent demise of software don&#8217;t have much intuition for <em>making software</em>. We&#8217;ve had software for about fifty years now. Though it has improved meaningfully over the years, it has always been inadequate. As a programmer in 2020 I was able to do what would&#8217;ve taken hundreds of man-years in 1970; the leverage gained is incredible, but the results still leave massive space for improvement at every step along the way. People underestimate <a href="https://en.wikipedia.org/wiki/Jevons_paradox">Jevons Paradox</a>. Importantly, this does not mean that software engineering is a forever-resilient source of jobs. Of course not; nothing is. But my point is that again, <em>the sector has more momentum and ability to absorb labor than people give it credit for, and saturation of this will be a slow process, giving us time to respond and adjust.</em></p><h3><strong>3. Re-Industrialization</strong></h3><p>There will be some labor displacement, of course. Driving stands out. Many types of white-collar work, as Citrini suggests, will undergo some gyration as some jobs disappear and others change meaningfully. 
AI may be the straw that breaks the camel&#8217;s back for jobs like the real estate broker, where the job had actually already disappeared a long time ago, but the pay was still there.</p><p>The saving grace here is that in the US, we have a virtually limitless capacity and need for re-industrialization. You may have heard about bringing back manufacturing, but it&#8217;s more than that: we largely no longer know how to make, and don&#8217;t have the facilities to produce, the core building blocks of modern life: batteries, motors, small semiconductors &#8212; <a href="https://www.notboring.co/p/the-electric-slide">the whole electric stack</a> is something we are almost entirely dependent on China and other countries for. What if there&#8217;s ever a military confrontation? Actually, it&#8217;s much worse than that: did you know China makes 90% of the world&#8217;s ammonia? If there&#8217;s a war, we can barely make fertilizer. We&#8217;d just starve.</p><p>Once you start looking at the physical world, you see a <strong>virtually endless scope</strong> for job-creating, nation-benefiting, fundamental infrastructural work that is politically bipartisan. </p><p>We&#8217;ve seen the economic and political milieu slowly make its way in this direction &#8212; talking about re-industrializing, manufacturing, deep tech, American dynamism, and so forth. My prediction is that as AI challenges white-collar labor, the political path of least resistance will be funding large-scale re-industrialization in the form of <strong>employment megaprojects</strong> which, thankfully, are not subject to a singularity but rather move at the friction-heavy speed of getting things done in the physical world. We&#8217;ll build bridges again. People will find it gratifying to see the fruits of their labor in the real world, not in digital abstractions.
The Senior PM at Salesforce who loses their $180K job might find a new job in the field at the California Desalination Works, to finally, finally, end the 25-year drought. And it shouldn&#8217;t be merely <em>good enough</em>, but <em>excellent</em>. And once it is built, it must be maintained! Once more, Jevons Paradox can apply, if you allow it to.</p><h3><strong>4. And Beyond</strong></h3><p>The outcome of industrial megaprojects is of course that we move toward abundance: America will once more be independent, and make things at large scale and low cost. Transcending material scarcity is the key: in the long run, if we do lose almost all the white-collar jobs to AI, we have to be able to provide people with a continued high quality of life. Part of this we get automatically, just because AI taking margins to zero means that those consumer products will become equivalently cheap.</p><p>My view is that different parts of the economy will &#8220;take off&#8221; at varying speeds, and virtually all the areas are slower than a piece like Citrini&#8217;s might suggest. To be clear, I am extremely bullish on AI, and expect that one day, my labor too will be obsolete. But it&#8217;s going to take a while to get there, and that time gives us the opportunity to make good policy.</p><p>On that front, preventing a market meltdown the way Citrini imagines is actually pretty easy, and the federal government&#8217;s response during Covid showed how proactive and aggressive it is willing to be. I&#8217;d expect large-scale stimulus to kick in quickly once needed. It slightly irks me to say that it won&#8217;t be efficient, but that&#8217;s also not the point. The point is material prosperity for people in the course of their lives &#8212; broad consumer well-being that legitimizes the state and carries forth the social contract &#8212; not satisfying the accounting metrics or economic norms of the past.
If we are nimble and responsive to this slow but sure technological revolution, then we will be fine.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://essays.johnloeber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Loeber on Substack! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[#31: Open-Source Software in the Age of AI]]></title><description><![CDATA[The future will have a lot more open-source software, written by machines but funded by people, giving better experiences and greater freedom to the consumer.]]></description><link>https://essays.johnloeber.com/p/31-open-source-software-in-the-age</link><guid isPermaLink="false">https://essays.johnloeber.com/p/31-open-source-software-in-the-age</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Fri, 06 Feb 2026 04:20:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/dd48b76a-5f81-4eb9-afea-6ae6b4a569c3_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Everything I believed about software has changed rapidly over the past few years. AI tools like Codex, Cursor, and Claude Code are making development vastly faster and more accessible. 
The value of traditional software businesses is cratering,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> and the far-and-away most important software assets are no longer code, but model weights.</p><p>These changes also extend to <a href="https://en.wikipedia.org/wiki/Open-source_software">open-source software</a>, our great collective project that underpins technology everywhere. Over the last few weeks, I&#8217;ve seen projects like <a href="https://github.com/tldraw/tldraw">tldraw</a> change <strong>how open-source software is written</strong>, and <a href="https://github.com/openclaw/openclaw">openclaw</a> change <strong>how it is utilized</strong>. These are important changes, not just for open-source aficionados like myself. In this blog post, I&#8217;ll make some predictions: the future will have a lot more open-source software, written by machines but funded by people, giving better experiences and greater freedom to the consumer.</p><h2>Creation of Open-Source Software</h2><p>The term <em>open-source</em> means that the software&#8217;s source code is published for anyone to read, modify, and redistribute. This key tenet provides several important benefits:</p><ul><li><p>Trust: anyone can audit the software and raise issues;</p></li><li><p>Free as in freedom: anyone can edit their copy of the software as they please;</p></li><li><p>Collaboration: anyone can publish their modified versions of the software;</p></li><li><p>Free as in beer: it doesn&#8217;t cost any money!</p></li></ul><p>This puts open-source software (OSS) in a neat intersection of the personal and the professional. People contribute to it in areas of their interest; OSS is a patchwork of passion projects. While those engineers are not doing it for money, the experience gained by this work, and the respect it confers, become a professional advantage.
</p><p>Many junior engineers, myself once included, took their first steps into &#8220;real&#8221; engineering by making small contributions to open-source projects. But today, many open-source projects are considering closing themselves to contributions:</p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/mitchellh/status/2018458123632283679&quot;,&quot;full_text&quot;:&quot;I've been doing open source since I was a teenager (over 20yrs). And for the first time ever, I'm considering closing external PRs to my OSS projects completely. This will throw the baby out with the bathwater and I hate that, but we close auto-opened slop PRs every single day.&quot;,&quot;username&quot;:&quot;mitchellh&quot;,&quot;name&quot;:&quot;Mitchell Hashimoto&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1141762999838842880/64_Y4_XB_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-02T22:54:52.000Z&quot;,&quot;photos&quot;:[],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:41,&quot;retweet_count&quot;:14,&quot;like_count&quot;:457,&quot;impression_count&quot;:21327,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:false}" data-component-name="Twitter2ToDOM"></div><p>As the author of tldraw <a href="https://tldraw.dev/blog/stay-away-from-my-trash">explains</a>, AI coding tools are leading to a proliferation of very inexperienced developers using AI to generate massive, poorly-formed PRs (requests to contribute code to a project), which drain time and attention from the maintainers, who need to review those requests. </p><p>Previously, low-quality contributions had always been an issue, but they were kept in check by the fact that writing them still took significant time and effort, much more than reviewing them. 
For tldraw, <a href="https://x.com/PradyuPrasad/status/2018142579859296548">Pradyumna Prasad</a> ran the numbers: PRs by external contributors went from 90% acceptance in 2021 to 43% in 2025 &#8212; nearly a 6x increase (10% &#8594; 60%) in bad submissions, an overwhelming quantity.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!i9Uh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a5895b2-6870-4341-a768-fa37efbf085a_2770x1390.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!i9Uh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a5895b2-6870-4341-a768-fa37efbf085a_2770x1390.jpeg 424w, https://substackcdn.com/image/fetch/$s_!i9Uh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a5895b2-6870-4341-a768-fa37efbf085a_2770x1390.jpeg 848w, https://substackcdn.com/image/fetch/$s_!i9Uh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a5895b2-6870-4341-a768-fa37efbf085a_2770x1390.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!i9Uh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a5895b2-6870-4341-a768-fa37efbf085a_2770x1390.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!i9Uh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8a5895b2-6870-4341-a768-fa37efbf085a_2770x1390.jpeg" width="1456" height="731" 
class="sizing-normal" alt="" loading="lazy"></picture></div></a></figure></div><p>There are a few clear ways to address this problem.</p><ul><li><p>Projects could use AI tools like <a href="https://www.greptile.com/">Greptile</a> to automatically review PRs, and stem the tide of junk. But those cost money, which OSS projects usually don&#8217;t have.</p></li><li><p>Projects can do some gatekeeping and set a more requirements-heavy process for first-time contributors.</p></li><li><p>Projects could close their doors to code contributions altogether and only accept monetary contributions.</p></li></ul><p>The third bullet here might sound crazy, but it&#8217;s not that far off.
Many skilled engineers <a href="https://x.com/tszzl/status/2015262304913469808?s=20">already</a> <a href="https://news.ycombinator.com/item?id=46835618">report</a> <a href="https://antirez.com/news/159">basically</a> no longer writing code, but just interfacing with AI to define the job to be done, and then letting the AI run with it. This means that the leverage provided by <em>money</em> (to purchase AI tokens) on top of the time of a skilled developer is growing significantly. SemiAnalysis reports that <a href="https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point">already 4% of public GitHub commits</a> are written by Claude Code.</p><p>Additionally, the open-source development model where anyone can try to contribute is also under strain from <a href="https://en.wikipedia.org/wiki/Supply_chain_attack">supply chain attacks</a>, in which malicious contributors try to hide <a href="https://en.wikipedia.org/wiki/XZ_Utils_backdoor">sophisticated malware</a> in open-source software. As with the deluge of low-quality PRs, we should expect these attacks to increase due to the ease of AI code generation.</p><p>Given these trends, we may expect a world in which projects are closed-by-default to outside contributions, and a small group of trusted maintainers mostly direct AI to make contributions by spending donated funds. This would also make donations more appealing for donors, because they could direct the funds more precisely &#8212; instead of a general-purpose $20 donation, you could direct those dollars to buy AI tokens to make a specific change.</p><p>If the future of open-source software is therefore <em>basically crowdfunding</em>, then this would be a tremendous accelerant: right now, the creation of open-source software is constrained by the spare-time throughput of a small number of skilled humans.
But if it becomes easy and accessible to contribute small amounts of money to creating the software that everyone<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> wants, then this ecosystem will grow by orders of magnitude overnight. There is no way that proprietary systems would be able to keep up. It will finally be The Year of Linux on the Desktop.</p><h2>Utilization of Open-Source Software</h2><p>Open-source came about in the earliest days of computing, when all software, and the data it used, lived on your computer. But around 2010, mobile and the shift to cloud changed this dynamic: suddenly your software was online, and the data not on your disk, but on some faraway server. </p><p>This made it harder for open-source software to succeed, because even if you wanted to use it, you might not be able to get your data from a proprietary service and use it in an open-source client. The shift to cloud meant an emergence of <a href="https://en.wikipedia.org/wiki/Closed_platform">walled gardens</a>, where users are not free to port their data to competing services.</p><p>Some enthusiasts<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> wished for a return to the prior paradigm, in which users could possess all their data on their computers, and provide that data, only as needed, to third-party cloud services. 
There are a lot of good things about this &#8212; privacy, security, ownership &#8212; but most users don&#8217;t care much, and the level of engineering required to implement API interfaces <em>for every single service</em> makes it impractical.</p><p>However, in our burgeoning AI era, this dream lives:</p><ul><li><p>Many cloud software products that would operate on your data can be easily replicated with AI coding tools;</p></li><li><p>AI is amazingly good at implementing on-the-fly API interfaces to services, making it easy to pull all your data from a service to your disk, or to conversely upload it. </p></li></ul><p>What we saw with openclaw is that it is viable to pull your entire context to your machine, give it to an AI agent, and let it rip. A variant of the bitter lesson strikes again: having everything<em> </em>in one place and letting the AI munge it is both more general and more performant than painstakingly curating a bunch of special-cased behavior.</p><p>There is now a point to having all your data on your disk that goes beyond abstract trust, privacy, security, or long-term ownership: <em>having all your data will make your AI provide better results</em>! I suspect that in coming years, we&#8217;ll see a great repatriation of user data from lots of fragmented services into <em>just one place &#8212; </em>either a cloud repository that they control, or to their own hardware. </p><p>Having overcome the walled garden problem, this creates opportunities for open-source software<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> to operate on the user&#8217;s data. But what&#8217;s going to be operating most heavily on the user&#8217;s data? Well, AI models, of course. 
And those can be either proprietary (like OpenAI or Anthropic today) or open-source &#8212; ones that you can run yourself.</p><h2>Open-Source Models</h2><p>What does it mean for an AI model to be open-source? A model is not just determined by its source code. The authors would have to make available the model weights, training code, inference code, and training data. The training data is a tall bar to clear, because it&#8217;s usually legally messy.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> For that reason, the most prominent &#8220;open&#8221; models like Kimi, Qwen, and Llama are predominantly <strong>open-weight</strong>. </p><p>On the one hand, this violates some core principles of open-source: if you can&#8217;t reproduce it, then you can&#8217;t audit it. There&#8217;s a significant problem in not knowing how your model was trained: what if it was built to suppress certain types of information? However, this may not be that severe an issue in reality, because you can build evaluations to test the quality of the model. 
For example, it would be very easy to figure out if a model has been trained not to speak of Tiananmen Square: just ask it.</p><p>But on the other hand, an open-weight model is still enormously empowering from an open-source perspective: you can run it privately, trustlessly, on your home computer.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> And between algorithmic and hardware improvements, the performance of models run at home will improve every year.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> It&#8217;s quite possible that this ends up sufficient for personal purposes.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><h2>Conclusion</h2><p>There&#8217;s a good chance we are moving into a golden era of open-source software. It is faster and easier than ever for people around the globe to collaborate in calling great software into reality, and the accessibility of this creative process &#8212; all you need is taste and a little bit of money &#8212; is increasing dramatically.</p><p>Further, the explosive capability of AI to both pull data from, and make obsolete, traditional cloud software means that we may see a paradigm shift: data back to the user&#8217;s own disk. The user will benefit greatly from having all their personal context in one place for AI to consume and analyze. Finally, for many cases, the AI that runs on this data may take the form of a fully locally hosted open-weight model. </p><p>All in all, this is tremendously hopeful for open-source values. What we&#8217;ve described above will return sovereignty, freedom, privacy, and trustlessness to consumers everywhere. It looks deeply empowering for the individual.
It is bearish for proprietary software, which will struggle both to compete financially in this paradigm and to maintain feature parity with open-source projects that we should expect to grow in scope by orders of magnitude. </p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For public SaaS companies, enterprise-value-to-revenue multiples went from 15-20x in 2021 to 5-8x today. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>By which I don&#8217;t mean one-size-fits-all software for <em>everyone</em>, but endless variations to suit every person&#8217;s preferences.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>For example, the homelab community, or the Urbit project.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This applies to open-source software that&#8217;s hosted in either the cloud or locally on the user&#8217;s computer. However, if the user has all their data on their computer, then the software would likely be run on their computer, too, because:</p><ol><li><p>This avoids a slow upload/download cycle, and preserves the user&#8217;s privacy.</p></li><li><p>Conventionally, when open-source software is offered as optionally cloud-hosted, that is a monetization technique: it essentially charges for handling installation, maintenance, and managing the user&#8217;s data. </p></li></ol></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>I.e. the training data is scraped under some level of legal controversy, or perhaps licensed as part of an exclusive agreement, etc. Very hard to re-distribute. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>You may need a few thousand dollars&#8217; worth of computing equipment depending on the speed of inference that you want. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>That&#8217;s not even to mention the possibility of eventually training your own models. If training algorithms improve enough, you may be able to do so using only small amounts of data. For example, Andrej Karpathy has been building GPT-2 at home <a href="https://x.com/karpathy/status/2018804068874064198?s=20">routinely over the years</a> and has achieved enormous efficiency improvements along the way. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>I think of this as similar to other historical computing constraints: twenty years ago, internet speeds and hard drive sizes were insufficient for consumers. But improvements in the underlying technologies added orders of magnitude of capacity &#8212; and though consumer utilization increased, it did so in a way that&#8217;s ultimately limited: consumers did not create thousands of times more Microsoft Word documents than before. File sizes of images did not increase by orders of magnitude. Normal consumers store only so much 4K video because there&#8217;s only so much that they have the time to watch, and so forth. In a similar vein, the consumer-level (not corporate!) demand for AI may grow more slowly than the underlying technologies add capacity. I only generate so much data to be analyzed. 
</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[#30: A Plea for Silicon Valley to Enter Politics]]></title><description><![CDATA[Silicon Valley has been one of the great drivers of American prosperity: six of the ten largest companies in the world were founded here. They disproportionately drive the American stock markets; virtually every American retirement account depends on them.]]></description><link>https://essays.johnloeber.com/p/30-a-plea-for-silicon-valley-to-enter</link><guid isPermaLink="false">https://essays.johnloeber.com/p/30-a-plea-for-silicon-valley-to-enter</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Mon, 12 Jan 2026 17:57:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3533f340-d721-4f45-81e9-14b3b6ecd8cd_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Silicon Valley has been one of the great engines of American prosperity: six of the ten largest companies in the world were founded here.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> They disproportionately drive the American stock markets; virtually every American retirement account depends on them. It is a source of tremendous global hard and soft power, a cultural icon, one of America&#8217;s great successes: when people around the world want to build technology and aspire to the future, they speak of Silicon Valley. </p><p>And yet, in its entire fifty-year history, <strong>Silicon Valley has never had political representation</strong>: no congressman, no senator, no governor has ever come out of Silicon Valley&#8217;s great technology industry. 
<em>At best</em> you might count one mayor<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> and one guy on San Francisco&#8217;s eleven-person Board of Supervisors.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> This is paltry, and the lack of representation is now putting Silicon Valley at risk of being destroyed by political looters. The timing could not be worse: the global AI race is on, and bad local policy could squander the key to America&#8217;s technological dominance and sovereignty. This places extraordinarily high, national-level stakes on keeping Silicon Valley alive. </p><p>In this essay, I will make the case that: </p><ol><li><p>Financially tough times are ahead for California;</p></li><li><p>California&#8217;s government will almost certainly try to loot Silicon Valley;</p></li><li><p>Silicon Valley will flee;</p></li><li><p>This will create a severe loss for the Bay Area, for California, and for the USA;</p></li><li><p>The only way out is for <strong>successful technologists to run for and win office</strong> at every level in the 2026 mid-term elections, especially for <strong>governor</strong>.</p></li></ol><p>Most importantly: if you are a successful person in Silicon Valley, I want you to think very seriously about running for office. Someone needs to step up.</p><h3>1. Tough Times Ahead for California</h3><p>California is going through an economic boom. Tax revenue has roughly doubled over the last ten years. 
However, literally all of it is being spent as it comes in!</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DrB9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!DrB9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png 424w, https://substackcdn.com/image/fetch/$s_!DrB9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png 848w, https://substackcdn.com/image/fetch/$s_!DrB9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png 1272w, https://substackcdn.com/image/fetch/$s_!DrB9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!DrB9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png" width="1456" height="742" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:742,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:515272,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://loeber.substack.com/i/184181330?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!DrB9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png 424w, https://substackcdn.com/image/fetch/$s_!DrB9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png 848w, https://substackcdn.com/image/fetch/$s_!DrB9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png 1272w, https://substackcdn.com/image/fetch/$s_!DrB9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53299786-9973-48b0-aa7f-ba2e43ed80b7_1608x820.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It&#8217;s a strange picture: revenues are on a generational tear &#8212; chiefly because tech is succeeding more than ever &#8212; but the state is running a deficit. </p><p>Worse, these high-level statistics actually look better than the reality, because the General Fund constitutes only about 60% of the state&#8217;s total expenditures. Conventional state budget discussions do not include all the details. For example, Governor Newsom took out a <a href="https://calmatters.org/economy/2025/01/no-solutions-from-leaders-fr-unemployment-benefits-fund/">$20B loan</a> for pandemic-era unemployment programs, which is now being slowly repaid by additional federal payroll tax. 
And California&#8217;s pensions are <a href="https://reason.org/commentary/californias-state-and-local-pension-plans-have-over-265-billion-in-debt">underfunded</a> to the tune of a head-spinning $260B, for which the taxpayer is ultimately responsible.</p><p>Sometimes, when someone gets a financial windfall, it doesn&#8217;t encourage prudent saving and reinvestment, but puts them on a track of profligate, eventually-ruinous spending. This is what&#8217;s happening to the state of California. The California Legislative Analyst&#8217;s Office has projected that <strong>the state will run a $120B deficit in five years</strong>. (Simon Berens published a <a href="https://california-budget.com/">great interactive tool</a> for exploring these projections.) California has the simple but hard problem of runaway spending and out-of-control entitlements.</p><p>Today, things are okay. Industry is booming, and the 2025-2026 deficit is most recently projected at <a href="https://www.sfchronicle.com/politics/article/gavin-newsom-california-budget-21284610.php">only $3B</a>. But as the business cycle naturally gyrates, severe pressure will be exerted on California&#8217;s public finances, and on the next state administration to plug the gaping hole in the budget. </p><h3>2. The Looting of Silicon Valley</h3><p>This essay follows a recent proposal for a wealth tax on billionaires,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> which is already misfiring spectacularly. But what stands out is that this heavy-handed, controversial tax is being proposed <em>during a time of relative prosperity</em>. </p><p>We must ask: if extraordinary taxes are being considered during good times, then what will bad times bring? What happens when the budget is <em>actually under pressure</em>? </p><p>You might hope that the state will reduce its spending, but that&#8217;s the last thing that will happen. 
Reducing the budget is very unpopular, because people get used to the spending! Nobody wants to spend less: if there were any demand for efficiency in government spending, then California&#8217;s financial catastrophes &#8212; spending $70B on high-speed rail with little to show for it, $24B on homelessness <a href="https://calmatters.org/housing/homelessness/2024/04/california-homelessness-spending/?utm_source=chatgpt.com">only for it to go up</a>, or up to $31B on <a href="https://abc7news.com/post/california-edd-unemployment-fraud-ca-scam-insurance">fraudulent unemployment claims</a> &#8212; would&#8217;ve been righted long ago. The unfortunate reality is that the state budget is a gravy train for millions of people, and nobody has had the strength of will to contain it.</p><p>Rather than cutting spending, the state will raise taxes. Of course, it is easiest to raise those taxes from a small minority of people and institutions with lots of money: Silicon Valley is a sitting duck.</p><p>After all, how many people have a personal stake strong enough to vote <em>against</em> a tax on, for example, capital raised by venture funds or startups? A few thousand? Why do a one-time billionaire wealth tax when you could do it more frequently, or lower the bar to $100M or perhaps even $10M? </p><p>This might sound far out, but bear in mind: this is the same state that was radical enough<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> to pass Prop 13, <em>because of which it does not have property tax hikes available</em>. By California&#8217;s own design, wealth, income, and sales taxes are the easiest to reach for &#8212; a burden that will end up being borne by the middle class, not billionaires.</p><h3>3. 
Silicon Valley Will Flee</h3><p>As in any great tragedy, the sin itself, the attempted looting of Silicon Valley, is pointless: these are some of the smartest people in the world, who will see it coming from miles away. Many of their jobs are literally <em>predicting the future</em>. The public-sector unions will not outsmart them.</p><p>We saw this just over the last few weeks with the proposed wealth tax on billionaires: though the tax is to be voted on in November 2026 and implemented in 2027 if it passes, the tax obligation would be backdated to January 1, 2026 &#8212; so if the tax passes, billionaires are locked in to paying it and can&#8217;t relocate to escape it. </p><p>Unless, of course, the billionaires relocate in the few weeks between the ballot initiative being announced in October and the tax threshold date of January 1. Which is exactly what happened: <a href="https://x.com/chamath/status/2010215459522548184?s=20">over a trillion dollars</a> of wealth fled California in a few weeks, over a ballot measure which may not even pass.</p><p>The people of Silicon Valley are highly mobile. Their work can famously be done from anywhere. They&#8217;re also the <a href="https://www.sfchronicle.com/sf/article/san-francisco-fewest-kids-data-21044908.php">most childless people</a> in the country &#8212; conventional obstacles to moving, like nearby family or kids in school, apply much less here than elsewhere. 
The network effects of Silicon Valley are <a href="https://www.apricitas.io/p/california-keeps-losing-tech-jobs">already weakening</a>: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!k06o!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!k06o!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png 424w, https://substackcdn.com/image/fetch/$s_!k06o!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png 848w, https://substackcdn.com/image/fetch/$s_!k06o!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png 1272w, https://substackcdn.com/image/fetch/$s_!k06o!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!k06o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png" width="1456" height="930" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:930,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!k06o!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png 424w, https://substackcdn.com/image/fetch/$s_!k06o!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png 848w, https://substackcdn.com/image/fetch/$s_!k06o!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png 1272w, https://substackcdn.com/image/fetch/$s_!k06o!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5df4adb7-fcd1-423a-b4f4-48a15b219092_2886x1843.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Tragically, despite harvesting enormous tax revenue from these people, California has failed to put it toward building things that would keep them there: the quality of public schools is highly variable, the public infrastructure is poor, and public safety is <a href="https://x.com/JoshConstine/status/2009040096998339072?s=20">bad</a>. If California used its taxes to offer an exceptionally high quality of public services, then perhaps it&#8217;d have more pull. But it doesn&#8217;t. The divorce between taxes charged and public services provided occurred long ago. </p><p>I have never met anyone in California who is excited about a new tax because it means a new local benefit, like a road or a sports center. The going assumption is that any new tax will be wasted in byzantine graft, no matter how noble the objective. 
We pay taxes like a club membership fee: the club will charge as much as it can, that&#8217;s the price of living here, and you&#8217;ll get nothing in return except admission. But the downfall of any club is that it arrogantly forgets that it is a <em>product</em>, and when the product gets worse and more expensive, the customers will go elsewhere &#8212; slowly, then suddenly.</p><p>It bears emphasizing that during Covid, many people thought Silicon Valley was done. Overzealous stay-at-home mandates, remote work, and an absolutely bonkers level of local crime had technologists leaving in droves. It felt apocalyptic: by June of 2021, nearly everyone I knew had left. The <a href="https://en.wikipedia.org/wiki/California_exodus">California exodus</a> was real. My entire social network had dispersed to Miami, Austin, Denver, New York, Nevada, and so forth. And then I left, too.</p><p>Silicon Valley was saved just in time by AI, which reignited the local network effect. I returned, but many of my friends remained elsewhere. Though it&#8217;s boom-time in the Bay Area once more, this episode laid bare that Silicon Valley, like any product of network effects, is fragile: it is very hard to reverse the doom loop once it starts.</p><h3>4. Loss for the Bay Area, California, and the USA</h3><p>Potential losses to the Bay Area and California are obvious. Once executives start moving, their companies start moving, too. Google&#8217;s 80-acre San Jose megacampus is <a href="https://www.cnbc.com/2023/04/21/googles-80-acre-san-jose-mega-campus-on-hold-amid-economic-slowdown-.html#:~:text=Google's%2080%2Dacre%20San%20Jose%20mega%2Dcampus%20is%20on,first%20quarter%20to%20reduce%20global%20office%20space.">on hold indefinitely</a>. Meanwhile, employees are moving into Google&#8217;s offices <a href="https://www.msn.com/en-us/money/other/google-is-moving-into-sail-shaped-downtown-austin-office-tower-after-years-of-delay/ar-AA1TQhob">in Austin</a>. 
Citadel fully relocated to Miami after 30 years in Chicago. It can be done. </p><p>California is uniquely vulnerable because ~45% of its tax revenue stems from personal income tax, and ~35-50% of that revenue<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> comes from the top 1% of earners.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> The mere proposition of the wealth tax is already going to cost a tremendous amount of tax revenue &#8212; more over the next few years than the tax ever would&#8217;ve gained had it passed. The new shortfall of tax revenue means that the burden will now fall on the next tranche of productive residents. The vicious cycle &#8212; trying to loot a small minority group, watching it flee, then moving on to loot the next one &#8212; happens almost automatically; it can destroy unlimited value and capture none.</p><p>In some sense, this is a classic <a href="https://en.wikipedia.org/wiki/Resource_curse">resource curse</a>: the state of California has historically had a seemingly never-ending spigot of money, totally irrespective of any actual work that the state does, and has only had to spend it. This creates dependency, waste, and hollow institutions. And when it looks like the spigot will turn off, the state will go into overdrive destroying it to get every last drop out. Truly great mismanagement requires truly great resources.</p><p>Cracks are showing elsewhere in the armor: <a href="https://www.wsj.com/business/media/los-angeles-entertainment-economy-downturn-7879105c">Hollywood is no more</a>. And the homes in the Pacific Palisades that burned down a year ago still haven&#8217;t been rebuilt. California&#8217;s lack of state capacity is clear. 
It now has the most expensive electricity in the continental United States, which is reminiscent of de-industrializing countries like Germany and the UK, currently undergoing economic doom loops themselves. Californians would be wise to look to those countries, for which the last 25 years hold just one lesson: it is possible to fall from prosperity.</p><p>California would not be the first state to lose its fortune. Boston, once a hotbed of American entrepreneurship, <a href="https://x.com/WillManidis/status/2008526902554775586">managed to kill</a> its startup ecosystem. Many countries in Europe have experimented with wealth taxes and seen their most productive citizens leave. Too-clever-by-half countries like Norway slapped exit taxes on top, only to choke off local entrepreneurship altogether. It is a bitter irony that nearly every great technology company with European founders is started in America. California may be beautiful, but so is Europe. People move on.</p><p>But the story does not end here. The stakes are greater: we are living through the AI revolution. America must develop leading AI capabilities to keep our technological position, and &#8212; if AI truly does diffuse everywhere &#8212; to keep our <a href="https://loeber.substack.com/p/28-sovereignty-in-the-age-of-llms">sovereignty</a>. We are in the midst of an AI race, and we would be well-advised to stay ahead! To that end, we have the greatest possible advantage: a plurality of the world&#8217;s greatest AI researchers and engineers all living in the same place, knowing each other, exchanging information in their tight-knit community. We have an organic, private-sector Manhattan Project. We have firms raising <a href="https://x.com/a16z/status/2009614226617233440">tens of billions of dollars</a> to invest in the smartest entrepreneurs building AI in America. This is a priceless national asset, far beyond the capabilities of any other nation-state but China. 
Squandering this advantage &#8212; and that includes the dispersion of Silicon Valley residents, crushing the productive in-person network effect &#8212; because of the short-sightedness of its own political representatives would be an unspeakable historical loss.</p><h3>5. Successful Technologists Must Run for Office</h3><p>The stakes are great. We as technologists need representation because we will otherwise be governed by unwise forces, which may spell our destruction.</p><blockquote><p><em>The greatest of penalties is being ruled by a worse man if one is not willing to rule oneself</em>.<br>Plato</p></blockquote><p>Silicon Valley figures have historically avoided politics for a variety of reasons: they have great companies to run, the financial payoff from entering politics tends to be poor, and the quality of life tends to be worse, with all kinds of people mad at you all the time. They have instead stuck to donating and delegated the thick-skinned job to representatives like Ro Khanna, who have unfortunately shown that they cannot be trusted to represent these constituents: as is so often the case with financialization, we cannot pay someone else to do the job; we must do it ourselves. Someone needs to step up.</p><p>It is an unthinkable sin that the work of the greatest innovators and savviest capital allocators of our time is given as tribute, placed on the high altar of government, only to be frittered away on waste and fraud. Only when the waste and fraud are cataclysmically bad has the tech community stepped up to respond. But being <em>reactive</em> is, of course, a poor way to participate. We need tech to be <em>proactively</em> and affirmatively involved in governance &#8212; imagine how prosperous our ecosystem would be if we worked to support it, rather than having it administered by fools who see it as a cumbersome cash-cow to be milked. 
In a wiser land, the government would throw everything it has at supporting Silicon Valley and making sure that the technology sector keeps compounding.</p><p>If you feel any love for where you live, at any level &#8212; whether for San Francisco, California, or the United States &#8212; and you&#8217;re successful enough to command the respect of the public, then you should run for office.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> We need representation. I will do whatever is in my power to support you. I love California and want it to succeed. Reach out to me and I will put you in touch with like-minded people.</p><p>As a parting thought, while I have focused on state and local issues in this essay, I&#8217;d like to highlight that perhaps even greater trials lie ahead: as AI begins affecting labor markets and changing our society more broadly, it is easy to foresee some public uncertainty and &#8220;reining in AI&#8221; <a href="https://x.com/pratyushbuddiga/status/2010426029048025236">becoming a hot-button political issue</a> at the federal level, and characters like Newsom or Khanna turning on Silicon Valley to advance their own national-political ambitions. Neutering Silicon Valley would, of course, leave the US losing the AI race to China. We need courageous technologists to step up and guard the goose that lays the golden eggs. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://essays.johnloeber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Loeber on Substack! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>As counted by public market cap: Nvidia, Apple, Google, Meta, Broadcom, and Tesla. I&#8217;m leaving out private companies and fun gotchas like Saudi Aramco technically having been founded in SF.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Matt Mahan of San Jose spent a few years building civic engagement technology products.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Bilal Mahmood was a tech founder before joining the SF BoS. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>This blog post is about much more than just the wealth tax, so I&#8217;m not going to spend any time addressing it. 
My position on it is basically the same as David Friedberg&#8217;s: it is an unprecedented <a href="https://x.com/friedberg/status/2005172467485020239?s=20">seizure of private property</a>, and a <a href="https://x.com/theallinpod/status/2008028170289643733?s=20">trojan horse</a> that will eventually seize the assets of the middle class. Once you take this genie out of the bottle, you&#8217;re not putting it back in. The mandate of the tax will only expand over time &#8212; taking from the economically productive classes to give to the political classes.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>As an aside, California has something of a historical record of passing unwise tax policy. San Francisco passed a gross receipts tax (Prop C) that caused <a href="https://www.sfchronicle.com/business/article/2nd-most-valuable-U-S-startup-to-leave-SF-as-14558067.php">Stripe to flee</a>, losing one of its largest, most talent-dense employers, and of course not capturing the intended revenue. In 1978, California passed Prop 13, now <a href="https://en.wikipedia.org/wiki/1978_California_Proposition_13">one of the great culprits</a> of the housing crisis.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>This depends on the tax year; it&#8217;s quite variable depending on e.g. 
liquidity events.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>This social contract was enormously generous to the state of California: all these mega-billionaires paid colossal income taxes, year after year, really just because they liked it there. It turned out that it was so easy to move, all this time. The wealth tax consideration was an unforced error of amazing magnitude.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>If I&#8217;m going to write a whole essay about why <em>you</em> should run for office, I should address: why don&#8217;t I run for office? There are a few reasons, but most importantly, I only recently moved back to California after not having been a resident for a few years, which would  hamper my credibility.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#29: Poison, Poison Everywhere]]></title><description><![CDATA[When I was in high school, my teacher once told us a crazy story.]]></description><link>https://essays.johnloeber.com/p/29-poison-poison-everywhere</link><guid isPermaLink="false">https://essays.johnloeber.com/p/29-poison-poison-everywhere</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sun, 26 Oct 2025 21:51:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5627950f-6557-44a3-bea9-96cdf08b2334_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I was in high school, my teacher once told us a crazy story. When he started teaching in Northern England in the late 1970s, he and the other teachers would often talk in the break room about how their students seemed to be getting dumber every year. 
It was so strange &#8212; the kind of thing you might say with a worried laugh but no explanation. Smart primary schoolers turned into middle schoolers who just didn&#8217;t get things.</p><p>Years later, he connected the dots: the school was at the bottom of a hill, in a little valley, and the playground was right by the busy main road. All the exhaust fumes pooled and hung in the air there. And these were the 1970s: literally all the gasoline was leaded.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> This was lead poisoning. Over the years, the children were getting brain damage.</p><p>Nobody knew. There was no pediatric lead testing.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> Later pilot studies in Birmingham, Manchester, and Glasgow would eventually confirm this: children were found to have <strong>average </strong>blood lead levels of 3-5x the safe maximum. Just imagine what the severe cases looked like. </p><p>This story has stuck with me. It features the shocking and tragic loss of healthy lives &#8212; condemned to live in functional disability &#8212; brought about by many well-intentioned people doing their best, trusting that the status quo is <em>safe</em> and <em>normal</em>. But it often isn&#8217;t &#8212; what you hope and trust to be fine is secretly killing you.</p><p>The world has come a long way on this. Standards have improved substantially: houses are no longer being built with asbestos, lead paint is no longer permitted (though chances are your house has some), public water is mostly clean, and so forth. But better codes don&#8217;t go all the way: if the municipal water is fine but my house&#8217;s pipes are made of lead, that&#8217;s still a big problem. 
If mold is silently growing in my walls, nobody&#8217;s looking out for that &#8212; I&#8217;m on my own.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://essays.johnloeber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://essays.johnloeber.com/subscribe?"><span>Subscribe now</span></a></p><p>And there are new dangers. Globalization means a world where nobody knows what&#8217;s in anything anymore because the supply chains are so complex, the financial incentives are to bring the costs down as much as possible, and when something is full of poison, you have no recourse. Somehow we wound up with <a href="https://www.plasticlist.org/report">steaks from Whole Foods</a> being chock-full of BPAs &#8212; yes, even <a href="https://www.sciencedirect.com/science/article/pii/S0269749123022352?via%3Dihub">meat has microplastics</a>!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> And regardless of whether you shop on Amazon or at Restoration Hardware, pretty much everything<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> is sourced &amp; manufactured far outside your control. </p><p>Is the furniture I sit in every day made with <a href="https://www.forbes.com/sites/rachelsandler/2019/10/14/wework-removes-in-office-phone-booths-due-to-formaldehyde-contamination/">harmful substances</a>? I don&#8217;t know. Are my plates, pots, and pans safe to eat from? No clue.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> And if they aren&#8217;t, there&#8217;s no way for me to assert my rights or collect a single penny from some faceless factory in Cambodia. 
If you think there&#8217;s any kind of quality control, there&#8217;s <strong>zero</strong> &#8212; nothing is getting inspected. <a href="https://www.wqow.com/health-watch/lead-and-cadmium-found-in-muscle-building-protein-powders-report-says/article_882063dc-8b86-5437-ac52-49530da83de0.html">Every nine months</a> it turns out my protein powder <a href="https://www.consumerreports.org/lead/protein-powders-and-shakes-contain-high-levels-of-lead-a4206364640/">contains heavy metals</a>.<strong> </strong>The border can&#8217;t even stop counterfeit Rolexes from getting through, and the bar for listing a product on Amazon is the floor.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> I am sorry to say that for consumers, the buck stops with no-one but you. And your position is totally helpless.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!M9ZZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg 424w, https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg 848w, https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg" width="3024" height="662" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/ce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:662,&quot;width&quot;:3024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:452234,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://loeber.substack.com/i/148958241?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f628f0f-38f5-475b-b7d3-611549d68197_4032x3024.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg 424w, https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!M9ZZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fce5ec811-0db7-4faf-b5bc-bfd9b2709a36_3024x662.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><p>Two years ago, I kept seeing these ads on the NYC subway. It&#8217;s so crazy: in one of the wealthiest cities on the planet, babies eating from lead-contaminated glassware is so pervasive a problem that a private company has to step up to do basic quality control.</p><p>NYC babies are not the only ones silently getting their IQs nuked because of careless manufacturers. Afghan children probably have the <a href="https://www.telegraph.co.uk/global-health/climate-and-people/why-afghans-are-slowly-being-poisoned-by-their-evening-meal/">catastrophically highest levels of blood lead</a> &#8212; even in the diaspora abroad &#8212; because virtually all the manufacturers of traditional Afghan cookpots were using lead-contaminated metals. Even when this was found out, <a href="https://www.king5.com/article/news/investigations/investigators/amazon-removes-afghan-pressure-cookers/281-b41a9a3f-dcdf-4bd8-b7f7-520254c8beeb">it took Amazon over a year</a> to take down the listings for the damn things. The level of public harm is off-the-charts. Chances are you don&#8217;t own one of these, but when&#8217;s the last time you might have eaten in a restaurant that does?</p><p>The problem is so overwhelming that you almost can&#8217;t engage it. There&#8217;s just too much stuff to check on your own. This is catnip for neurotic <a href="https://en.wikipedia.org/wiki/Type_A_and_Type_B_personality_theory">Type-As</a>. 
You&#8217;ll drive yourself crazy if you try to fix it. And, in fairness, none of these hazards are big and likely enough on their own to warrant your deep-dive attention. It&#8217;s <em>in aggregate</em> that they&#8217;re impactful: in your life, most risk factors aren&#8217;t an issue at all, but there&#8217;s probably <em>something</em> that needs to be found out and fixed. <strong>The only solution is to delegate it to a third party that you can trust to do a really thorough job. </strong>Only a business with this as its core competency is capable of the breadth and depth required for this Herculean task.</p><p>In Germany, there&#8217;s <a href="https://en.wikipedia.org/wiki/Stiftung_Warentest">a popular nonprofit</a> which tests consumer goods for safety and publishes the results. When I was a baby, my mom followed their publications, and only bought the baby foods, diapers, etc. that had been deemed high-quality. Those ratings alone directed many thousands of dollars of high-margin spend for her. Consumer goods are as big as markets get, parents are willing to spend virtually any amount of money for the benefit of their children, and the product scope is endless. There&#8217;s going to be a generational company that uncompromisingly creates trust and will charge a hefty premium for <strong>never</strong> breaking that trust. Providing infallible peace of mind is the strongest of moats.</p><p>I am seeing the latent demand. Technology is empowering citizen scientists: consumers are taking charge of their health. 
They&#8217;re buying <a href="http://whoop.com/">Whoop</a>,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> <a href="https://shop.miracare.com/">Mira</a>, <a href="https://www.levels.com/">Levels</a>, <a href="https://www.eightsleep.com/">Eight Sleep</a>, <a href="https://mynucleus.com/">Nucleus</a>, <a href="https://ezra.com/">Ezra</a>, <a href="https://www.functionhealth.com/">Function</a>, etc. to understand their bodies, optimize their health, and catch potential issues way ahead of time. They&#8217;re starting to want things like <a href="https://blueprint.bryanjohnson.com/">Blueprint</a>, where the manufacturer is staking their credibility on the work they&#8217;ve done to own the whole supply chain. </p><p>Soon the penny will drop with the public: health is not just about your body, but about your environment. People are starting to pay <a href="https://healthybuildings.hsph.harvard.edu/indoor-air-exposures-cognitive-test-scores-university-increased-ventilation-rates-covid19-risk-management/">attention to</a> <a href="https://news.ycombinator.com/item?id=41347868">air quality</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> They&#8217;re realizing that the &#8220;premium&#8221; consumer brands are <a href="https://x.com/davidmarcus/status/1827158416005198281?s=46&amp;t=P_1qAPWxIad-zsBCUF62qw">full of microplastics</a>. <strong>They&#8217;re waking up to the fact that life can and should feel better.</strong> Everyone wakes up congested, everyone gets headaches, everyone gets a rash sometimes &#8212; but these &#8220;normal&#8221; experiences are your body telling you that <em>something is wrong</em>. It&#8217;s just so common that it&#8217;s normalized. 
And so many health outcomes that people talk about in terms of <em>luck</em> are actually deterministic, but people gloss over there being causality at work.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> The problem is large. And we have the <a href="https://x.com/sdamico/status/1827171806756925873">science</a> to do better.</p><p>Health is the final frontier. The idea of <em>luxury</em> was once conferred by design, materials, and manufacturing &#8212; but today, even the highest-end goods are now instantly replicated for pennies on the dollar. The question that remains is <em>what lurks inside: </em>the peace-of-mind escape from hidden hazards is not just necessary, but offers infinite optimization.</p><p>This will be a big business. It has been on my mind for many years now. I&#8217;ve seen all the startups that have taken a stab here &#8212; Yuka, Oasis, Tap Score, you name it. But while I admire their missions, I don&#8217;t think anyone&#8217;s historically gotten this right <em>as a business</em>. Now I&#8217;ve finally met the right founders taking the right approach: empowering people as citizen scientists, and taking on the big task of monitoring for and remediating hazards at home. This is very important to me, and I am excited to help them succeed. If this mission sounds interesting to you, email me at contact@johnloeber.com and I&#8217;ll put you in touch.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://essays.johnloeber.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Loeber on Substack! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Unleaded gasoline started becoming available in the UK in 1983. Before that, it was literally all leaded.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Despite this history, the UK, unlike the US, to this day does not perform pediatric blood lead testing. In the UK, testing is only done on &#8220;specific suspicion of exposure.&#8221; </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Not to mention the cases when there is actual <em>fraud</em> in the supply chain: take all the cases of <a href="https://en.wikipedia.org/wiki/Olive_oil_regulation_and_adulteration">olive oil fraud</a>, where the real thing is diluted by low-quality oils. 
This kind of scam is especially heartbreaking because it specifically takes advantage of people paying a premium to take care of their health, and then they get the carcinogen cocktail instead.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Seriously. Go to Target or Walmart or any other trusted, main-street retailer: the six-letter nonsense brand names, signatures of factories in China going direct, are <a href="https://x.com/johnloeber/status/1960204319015498223">everywhere</a>. How trustworthy are these products? </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Just kidding. I own a home lead testing kit. Of course I&#8217;ve tested everything I eat off.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>I once looked into manufacturing and selling a niche nutritional supplement on Amazon. I hired some lawyers with FDA experience who informed me that I could make pretty much whatever I wanted, that nobody checks anything, and if enough consumers complain then maybe the FDA will send me a letter telling me to stop and then it&#8217;s time to take down the product. I was shocked, and did not proceed. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>My Whoop helped me find an allergy that would&#8217;ve probably taken a few years off my life expectancy. 
I&#8217;ll write about this another day.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Patrick Collison was, as usual, early to the trend: his air pollution piece sent me down this rabbit hole back in 2019, and here we are.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>There&#8217;s an obvious nod to carcinogenesis here, but I often wonder how much of children being &#8220;gifted&#8221; or not comes down to actually being born smarter versus just escaping their first few years of childhood without a dose of neurotoxins.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#28: Sovereignty in the Age of LLMs]]></title><description><![CDATA[When ChatGPT launched a little over two-and-a-half years ago, many people were impressed by the capabilities, but thought that LLMs would only slowly be introduced into sensitive contexts as they prove themselves beyond even the highest standards of doubt.]]></description><link>https://essays.johnloeber.com/p/28-sovereignty-in-the-age-of-llms</link><guid isPermaLink="false">https://essays.johnloeber.com/p/28-sovereignty-in-the-age-of-llms</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Thu, 24 Jul 2025 17:05:57 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c60a679b-019f-4733-b71d-d1325cb5f142_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When ChatGPT launched a little over two-and-a-half years ago, many people were impressed by the capabilities, but thought that LLMs would only slowly be introduced into sensitive contexts as they prove themselves beyond even the highest standards of doubt. 
This would have been similar to how we&#8217;ve been rolling out self-driving cars: gradually and carefully!</p><p>But this has not been the case at all. LLM adoption has been everywhere, for everything, all at once. Ordinary people use LLMs as their therapists, students use them for papers, lawyers use them for case filings, executives use them to draft their public statements, politicians use them to draft policies &#8212; from official memorandums down to tariff proposals. The format is incredibly seductive: the answer comes <em>instantly</em>, it&#8217;s usually good, the prose is nicely written, it feels authoritative, and you can get it at any level of detail that you want. </p><p>The reason why LLMs have made their way so quickly into even the most sensitive contexts is that there&#8217;s an <a href="https://en.wikipedia.org/wiki/Analog_hole">analog hole</a> problem: even if an organization forbids the use of LLMs, at the end of the day, a task is given to an <em>individual person</em>, that person has broad latitude to complete the task, nobody will know if they ask an LLM for the key questions, and the outputs almost always easily clear the &#8220;good enough&#8221; bar with minimal effort.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Voters use LLMs too, of course. Being an informed member of a democracy is <em>hard</em>: there&#8217;s so much going on. Even politicians who are in the thick of it all day every day are stretched thin for attention. I&#8217;m sure that voters and policymakers alike are finding LLMs to be a real godsend, helping them make sense of complex issues and hundred-page policy proposals. There&#8217;s no doubt that this is an improvement.</p><h3>Centralization</h3><p>What this points toward is a future where everyone is outsourcing their knowledge and reasoning, all the time. 
And I agree with Noam Brown that this largely isn&#8217;t going to be a <a href="https://youtu.be/ddd4xjuJTyg?si=LTR9fiTL2fIei8rz&amp;t=1050">multi-model future</a>: raw scaling seems to obviate the need for distinct models or architecture that routes between them. Between returns to scale in usage<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> and economies of scale in hardware/compute, the future of technology looks <em>much more centralized </em>than it has historically. There may only be a few companies providing state-of-the-art LLM services. </p><p>This raises an obvious concern about centralization. Societies today rely on a vast network of digital products and services, provided by all kinds of people with all kinds of backgrounds. The same is true for knowledge &#8212; we distill our views and opinions from a global jumble of resources. But the future looks different: both will be far more centralized.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> Some large, large percentage of all your interactions with a computer, or questions about the world, might just be with OpenAI.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Remember how your school teachers told you not to use Wikipedia as a primary source, and yet practically everyone does anyway? Dial that up by a factor of ten.</p><h3>Manipulation</h3><p>We&#8217;ve also seen how vulnerable people are to LLMs.  
There are plenty of stories of lonely people <a href="https://www.theguardian.com/tv-and-radio/2025/jul/12/i-felt-pure-unconditional-love-the-people-who-marry-their-ai-chatbots">falling in love</a> with LLMs, or falling into what some call LLM Psychosis.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> There&#8217;s even a very recent public example of this happening to a prominent investor with billions of dollars under management.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> These technologies are still in the very early stages of rolling out, but it&#8217;s clear that psychological dependency on LLMs will be a big theme of the coming years. </p><p>But you don&#8217;t just need to worry about explicit cases like AI companions pushing their users toward unsavory actions. Maybe you&#8217;re worried about LLM companies parroting the views of their executives. Or nation-states pushing propaganda through their LLM firms. Or <a href="https://en.wikipedia.org/wiki/1989_Tiananmen_Square_protests_and_massacre#Censorship_in_China">censoring/rewriting history</a> for controversial events. Maybe you&#8217;re worried about subtler, <em>just asking questions</em>-style propaganda <a href="https://en.wikipedia.org/wiki/Russian_web_brigades#Methods">designed to sow doubt and discord</a>, rather than push any particular view.</p><p>While those risks are real, they can be much subtler yet. 
Imagine asking an LLM what the causes of World War Two were, and consider these answers:</p><ul><li><p>The main causes of WW2 were&#8230;</p></li><li><p>The mainstream theories about the causes of WW2 were&#8230;</p></li></ul><p>Even if the factual content that follows is identical, the latter answer softly implies that the questions are not settled &#8212; <em>these are just theories</em> &#8212; and while there are mainstream ones, there must also be contrarian, more intriguing ones. You know exactly what the user is going to ask next. That&#8217;s all it takes. The wording here is totally benign &#8212; it would pass any safety test. And yet this kind of nudge, applied over and over again to any number of questions, across a whole population, would surely move public opinions and beliefs. Remember all the concerns about nation-state misinformation on social media? Again, dial it up by a factor of ten.</p><h3>Democracy</h3><p>The core principle of a democratic nation is that of <em>popular sovereignty</em>: the power of the government comes from the people, it governs only with their consent, and it is independent of any other power. </p><p>Conventionally, the people give power to the government by weighing their views in a voting process. But if everyone is ultimately getting their views from an LLM, and that LLM is biased in some way, then your core apparatus for bestowing consent has been compromised.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> Popular sovereignty has broken down. If you take a more aggressive position, you might say that sovereignty has been <em>lost</em> to the LLM provider: they&#8217;re now dictating the facts and views that permeate your culture, and everything is downstream of that. 
</p><h3>Sovereignty by LLM</h3><p>This suggests that in the future, a country won&#8217;t be self-assuredly sovereign unless it is able to train its own LLM<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> from scratch.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> That may seem like a tall order, but it&#8217;s necessary<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> &#8212; if the essence of your state is a consensus mechanism for navigating the facts of the world, then you can&#8217;t have someone else dictate the facts. You can&#8217;t even risk the possibility of this. Importantly, this concept is not new: in the US, foreign media ownership used to be <a href="https://en.wikipedia.org/wiki/Media_cross-ownership_in_the_United_States">heavily restricted</a>, and still is <a href="https://www.cullen-international.com/news/2022/01/Foreign-ownership-restrictions-for-TV-are-the-norm-in-the-Americas---except-Chile-.html">elsewhere</a>.</p><p>When it comes to LLM sovereignty, the US and China<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> are both in fine positions. Virtually all the leading LLM providers are American or Chinese companies. Saudi Arabia and the UAE are also well-situated in this respect: they may currently not have their own LLM providers, but they (are going to) have huge data centers on which LLMs are trained and run. Owning the hardware helps. </p><p>Europe is, as usual, in trouble. Unwise energy policy has saddled many European countries, particularly Britain and Germany, with some of the highest electricity costs in the world, making it hard for them to operate competitive data centers. 
To boot, they&#8217;ve been so focused on <em>regulating</em> rather than <em>owning</em> these assets that they have no frontier LLMs and therefore no seat at the table at all. </p><h3>Conclusion</h3><p>LLMs are going to be everywhere, informing all decisions, even for the most sensitive political matters. If democracies come down to information, and information comes down to LLMs, then any country will need its own LLM, probably even running in domestic data centers,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> as a simple matter of sovereignty. Few politicians currently seem to understand this, and many countries appear to be sleepwalking into a rapidly changing world. We may see supra-national alliances become established, or expand their scope of responsibilities, to include pooling resources for this purpose. The big question on my mind is how quickly this will happen, or whether it will take some kind of incident to provoke concern and spur nations into action. </p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Ironically, this is particularly severe for sensitive contexts: any complex, sensitive matter demands especially careful, diligent thought. This makes it even more attractive for LLM usage &#8212; people are lazy!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>By this I mean: an LLM service that is being used more often will gather more data on how people rate its answers, which will enable it to improve its service more effectively. More usage means a better feedback loop, which means a better product.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Importantly, this is where discussion of LLMs differs from prior discussions about sovereignty concerns with respect to technology. The exposure to a single provider is going to be much more concentrated with LLMs than in e.g. cloud computing, social media, etc. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Or Anthropic or Google or whichever other LLM provider you choose.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Some references:</p><ul><li><p>Recent high-profile <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">New York Times</a> article;</p></li><li><p><a href="https://futurism.com/pyschiatric-researchers-risk-ai">Futurism overview</a>;</p></li><li><p>Very long <a href="https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy">LessWrong post</a> on LLM Psychosis and Sycophancy;</p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Not mentioning the name here, since that kind of pile-on feels yucky. If you really care to find out, I&#8217;m sure you can.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>In a similar vein, an autocracy where the autocrat is being spoon-fed their opinions by an LLM with an agenda would similarly be compromised. 
Functionally, this is less of a concern because most people in democratic nations do not have a concept of a &#8220;legitimate&#8221; or a &#8220;compromised&#8221; autocracy.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>I&#8217;m not suggesting that LLM companies should be nationally owned like public utilities. The free market is the right arena for producing the best products. But some kind of public-private-partnership, as we have in other areas of national security importance, seems like the right operating model.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Open-weight models won&#8217;t suffice: you don&#8217;t know how the weights were determined, and you have to be wary of the LLM being compromised in some <a href="https://x.com/OwainEvans_UK/status/1947689616016085210">extremely subtle way</a>. The only exception would be a fully open, auditable, and verifiable LLM training run. 
It&#8217;s not clear to me whether that&#8217;s a realistic possibility; I just don&#8217;t know.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>It&#8217;s also necessary not just as a matter of <em>sovereignty</em> but as a matter of commercial reality: if LLMs become deeply interwoven into the operations of your country&#8217;s economy, then that dependency poses some risk.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>China is not a democracy, of course, but it still has a large, complex political apparatus for determining and making sense of the facts of the world. This raises all the same concerns around LLMs, just for a smaller subset of the population.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>I think there are likely versions of the world where many people run small, open-weight models on their own personal hardware at home. 
Such open-weight models would still need to be trained and published by a firm, which raises the question of <em>who&#8217;s training those models and where</em>, thereby again invoking the theme of sovereignty.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#27: Long Google]]></title><description><![CDATA[Two weeks ago, I put 10% of my net worth into Google stock.]]></description><link>https://essays.johnloeber.com/p/27-long-google</link><guid isPermaLink="false">https://essays.johnloeber.com/p/27-long-google</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sat, 12 Jul 2025 20:36:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/148f96b1-0a80-4a84-9c09-4c948481631e_1260x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Two weeks ago, I put 10% of my net worth into Google stock. This is a first for me: while I have held positions in other big tech companies over time, I&#8217;ve always shied away from Google because I don&#8217;t really understand advertising.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>In recent months, many other people have also shied away from Google: ChatGPT is eating into Google Search, and Google&#8217;s public response has been tepid. Is this a textbook example of the <a href="https://en.wikipedia.org/wiki/The_Innovator's_Dilemma">Innovator&#8217;s Dilemma</a>? 
<em>Will Google&#8217;s empire crumble?</em>  </p><p>Such fear and doubt is reflected in the stock: Google is now trading at a <a href="https://www.gurufocus.com/term/pettm/GOOGL#:~:text=As%20of%20today%20(2025%2D06,TTM)%20for%20today%20is%2019.03.">19x P/E ratio</a>, when its <a href="https://fullratio.com/stocks/nasdaq-googl/pe-ratio">historical average</a> over the past decade is 28x, and today&#8217;s <a href="https://www.ssga.com/us/en/intermediary/etfs/spdr-sp-500-etf-trust-spy">S&amp;P average</a> is 26x. In other words, the street ascribes a much lower value to Google&#8217;s profits than to those of other companies, implicitly anticipating a collapse in Google&#8217;s profitability. </p><p>But this is myopic, a view far too fixated on legacy conceptions of Google&#8217;s Search and advertising business. While the near term is anyone&#8217;s guess, the street substantially undervalues the totality of what Google has built, and how that positions Google for the future. My view is this:</p><ol><li><p>AI poses threats to Google&#8217;s Search business, but they are overrated and solvable;</p></li><li><p>In fact, AI may supercharge Google&#8217;s existing Search business;</p></li><li><p><strong>Google is best-positioned to win the AI race;</strong></p></li><li><p><strong>If Google wins the AI race, it may become a $20T+ company in 5-10 years;</strong></p></li><li><p>Oh, and, by the way, Waymo is a trillion-dollar company hiding in plain sight.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p></li></ol><p>Points three and four in bold are the ones that really matter. The AI community&#8217;s <a href="https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/">AGI timeline</a> is now only seven years out. 
Most people do not understand:</p><ul><li><p>These years will pass quickly;</p></li><li><p>As we get closer to AGI, trillions of dollars in potential revenue become unlocked. The first firm(s) to the finish line will win the largest economic prize in history.</p></li><li><p>Google is in the lead to win.</p></li></ul><p>It&#8217;s easy to miss the value. Many investors are bearish on Google because they are fixated on Search as an immutable one-trick-pony, and Search appears paralyzed in a changing world. But Google&#8217;s position for AGI is wildly underrated, and it presents opportunities that make questions like <em>whether Search makes money or not</em> unimportant. There is a much larger game in play now. My bet is that Google slowly but surely turns the ship, and in this essay I&#8217;ll chart their path from here to a $20T+ world.</p><h3>Part 1: Search</h3><p>Many commentators view AI as <a href="http://stratechery.com/2025/checking-in-on-ai-and-the-big-five/">disruptive</a> to Google Search: people are going to ChatGPT rather than to Google Search because it provides better answers,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> and the answers are exhaustive such that no monetizable click on an advertisement can occur. But this misses a few things:</p><ul><li><p>Net search volume is still growing. Google&#8217;s Search volume <a href="https://sparktoro.com/blog/new-research-google-search-grew-20-in-2024-receives-373x-more-searches-than-chatgpt">increased by 20%</a> from 2023 to 2024. This may feel like a mature industry, but in some respects it is still early! People are still coming online. 
Software continues eating the world.</p></li><li><p>If Search becomes more like a ChatGPT-style experience, that may decrease <em>link clicks</em>, but not necessarily <em>ad clicks</em>: only <a href="https://scoop.market.us/google-search-statistics/">~20% of searches show an ad</a>, and fewer yet result in an ad click. Today, most searches are not monetizable at all.</p></li><li><p>ChatGPT-style queries and answers may turn out <em>more monetizable </em>than traditional searches because the questions are higher-intent, and the answers surface far fewer links, which better nudges the user toward any displayed link.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The prose of the answer can further nudge the user. As this matures, I&#8217;d expect higher click-through-rates/overall value for advertisers.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li><li><p>Google has the world&#8217;s best dataset on queries, ads, and user behavior, and Google&#8217;s ads are already partially AI-generated today. The advertiser only has limited ability to provide guidance. Advances in AI further empower Google&#8217;s existing advertising flywheel.</p></li><li><p>Finally, Google may eventually capture <em>far more value</em> by not getting paid for an ad click, but by closing the loop and offering the product or service that the user is looking for. This enables Google to capture the full amount the user is willing to pay, rather than just the partial margin ceded to an ad click. 
More on this later.</p></li></ul><p>In short, the future of Search seems to come down to two questions:</p><ol><li><p><strong>If ChatGPT offers a superior form factor, can Search move toward that form factor and avoid disruption?</strong> I think so, and it seems to already be happening.</p></li><li><p><strong>Can advertising work just as well in that form factor as in traditional Search?</strong> Early results suggest <em>yes,</em> and it may work even better. The ChatGPT form factor is more powerful in how it can present the result to persuade user action.</p></li></ol><p>Finally, if there&#8217;s a lesson from the last twenty years: whether for countries or big tech companies, betting on the collapse of an incumbent with great momentum rarely works out. Google has <em>colossal momentum &#8212; </em>old user habits die hard, and Google&#8217;s services are among the most deeply entrenched in the day-to-day lives of consumers.</p><h3>Part 2: Positioning for the AI Race</h3><p>But forget about Google&#8217;s Search business for a minute, and consider what Google has:</p><ul><li><p>The most visited website on earth, the default entry-point to the internet for most humans for 25 years and counting;</p></li><li><p>The <a href="https://morningconsult.com/most-trusted-brands-2021/">#1 consumer brand</a> in the world;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li><li><p>Gemini: arguably <a href="https://lmarena.ai/leaderboard">the best AI models</a>;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p></li><li><p>YouTube: the world&#8217;s biggest repository of video data;</p></li><li><p>Google Search: the world&#8217;s biggest store of internet data, having scraped the entire internet for the past 25 years;</p></li><li><p>Google Books: the world&#8217;s biggest store of published words;<a 
class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p></li><li><p>GMail: the most popular email client with <a href="https://www.demandsage.com/gmail-statistics/">1.8B active users</a>;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p></li><li><p>Google Drive/Docs/Sheets: the <a href="https://www.forbes.com/sites/rashishrivastava/2023/01/19/google-docs-is-more-popular-than-microsoft-word-but-chatgpt-could-change-that/">most popular</a> <a href="https://explodingtopics.com/blog/google-workspace-stats">workplace suite</a> in the world;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p></li><li><p>Android: the most widely used mobile phone operating system on earth; </p></li><li><p>A mature devices business including phones, laptops, watches, home assistants&#8230;</p></li><li><p>Google Chrome: the most popular web browser in the world;</p></li><li><p>GCP: their own cloud, behind AWS and Azure;</p></li><li><p>TPUs: their own chips for machine learning, now <a href="https://finance.yahoo.com/news/openai-taps-google-cloud-tpus-205505484.html">used by OpenAI</a>;</p></li><li><p><a href="https://en.wikipedia.org/wiki/Google_data_centers">Global data centers</a> representing about $200-290B in investment-to-date<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> and another $75B committed;</p></li><li><p>$100B on their balance sheet;</p></li><li><p>~$110B in annual operating profit that they could plow into AI if they so wished;</p></li><li><p>~180,000 employees including some of the very best and brightest machine learning researchers and engineers on the planet;</p></li><li><p>A truly massive amount of user behavior and ad performance 
data;</p></li><li><p>Endless weird <a href="https://english.elpais.com/science-tech/2025-06-24/spanish-mathematician-javier-gomez-serrano-and-google-deepmind-team-up-to-solve-the-navier-stokes-million-dollar-problem.html">dark horse</a> projects that aren&#8217;t even on the public radar right now.</p></li></ul><p>Don&#8217;t be distracted by existing revenue or product-in-market. The more you think about Google&#8217;s structural advantage in AI, the more staggering it is.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> They own the whole vertical stack required to win.</p><p>The full strength of this competitive advantage against Anthropic, OpenAI and others is yet to become apparent: <strong>where other firms top out, Google can keep pushing</strong>. Right now, the big AI labs are all focused on making better use of their not-fully-exhausted resources in terms of data, capital, and compute. Therefore, model performance is pretty competitive, and the perceived market leader switches every few months. But eventually, these firms will fully saturate the data, capital, or compute available to them. <strong>And however much they may have, Google has a lot more.</strong> Similar to how Mistral, Cohere, and others once looked competitive and then couldn&#8217;t keep up against superior resources, the same fate may play out at much larger scale &#8212; companies worth tens or even hundreds of billions of dollars exhaust their resources while Google&#8217;s products and distribution keep improving.</p><p>For the last few weeks, Meta has given us a taste of what it means for a trillion-dollar company with conviction to flex its weight: raiding competing labs to the point that OpenAI <a href="https://futurism.com/openai-shutting-down-week">shut down for a week</a>. 
Google has barely begun seriously competing; the world will look different when it does.</p><h3>Part 3: Winning the AI Race</h3><p>What does it mean to <em>win </em>the race to AGI? There are three important parts to it:</p><ol><li><p><strong>Distribution</strong>: AGI replaces white-collar labor, which means that many services and products will become commoditized. Commodities differentiate themselves by the quality and efficiency of their distribution (marketing). Without a doubt, Google has the greatest distribution machine in the world. <br><br>Today, companies that offer commodity products figure out their gross margins, decide how much money to give up,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> give it to Google, and Google provides them some number of clicks, hopefully yielding a positive return to the company. The more lopsided the importance of distribution, the more margin gets reallocated to Google. There&#8217;s an argument that <em>marketing is the final industry</em> &#8212; human attention the final scarce resource &#8212; and Google is not just the leader, but has a powerful structural network effect and moat. That&#8217;s a winning position.</p></li><li><p><strong>Verticalization: </strong>one of the biggest unsettled questions in AI is <em>to whom the profits will accrue</em>. At every level of the stack, there is margin uncertainty: for example, do profits accrue at the application level? Or will applications be commoditized, and the pricing power will rest with the fundamental model providers? Or with the chipmakers? <em>Who&#8217;s going to squeeze whom? <br><br></em>In light of this, I&#8217;m inclined to bet on the player that can truly own the full vertical, from chips to end users, rather than on players in never-ending battle with their partners over margin and control. 
This verticalization doesn&#8217;t just provide greater financial safety; it also offers superior efficiency, through coordinated economies of scale, over any vertical stacking of patchwork competitor products.</p></li><li><p><strong>End Products: </strong>this is the one that people are really missing. In <em>Distribution</em> we spoke about the importance of Google&#8217;s distribution machine in a world of commoditized products and services. But if we assume anything close to AGI, then Google shouldn&#8217;t collect a fee for making a referral to a third-party service provider: AI enables Google to simply provide that service itself.<br><br>For example, if you Google for a hotel today, you encounter a rich cascade of middlemen and economic interactions: you are served an ad for a booking website, which in turn serves ads for numerous hotels; you look through them and click through screens of primitive upsells for car rentals, trip insurance, and so on.<br><br>But nobody <em>wants</em> to browse all these websites and rifle through all the options.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> Google&#8217;s AI will obviate this experience. It already knows all your preferences from years of GMail and Search data: it will pick the optimal hotel, discuss with you the services you&#8217;re most likely to want, cut out all the middlemen, and collect the fees directly. Talk about <em><a href="https://druriley.com/platform-risk/">platform risk</a></em> &#8212; many highly profitable products and services have been built on Google&#8217;s distribution platform, but as AI advances, Google will launch its own, superior offerings and eat those markets overnight. People are already saying that ChatGPT competes with lawyers in the margins: reflect on what AGI really means, then take that to its logical conclusion. 
</p></li></ol><p>As we get closer to AGI, trillions of dollars of revenue &#8212; every white-collar professional service,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a> every software product &#8212; will be up for grabs, and Google owns distribution. <strong>If you&#8217;re a Google exec and you don&#8217;t have a vision for Google providing most of the world&#8217;s digital services by 2035, then you&#8217;re not being ambitious enough.</strong></p><p>Importantly, I&#8217;m not suggesting that being first to AGI is all that matters. There&#8217;s a ton of value unlock along the way. For example, as coding models and computer use models keep improving, they will perform valuable labor at scale, presenting trillion-dollar revenue opportunities even before AGI.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> If Google has the AI capabilities to seize those opportunities, owns distribution, and owns the full vertical stack such that they have economies of scale and are totally independent of other players, then that seems like a clear winning position. AGI would only make the position stronger yet.</p><h3>Part 4: The Challenge</h3><p>Given all this great potential, <em>why is Google</em> <em>not winning more</em>? Why is the adoption of Gemini models so limited? How is ChatGPT taking market share from Google Search? Why are investors not more bullish? From my outsider&#8217;s perspective: </p><ul><li><p>Internal product ownership is far too fragmented. Different teams own poorly  divided parts of the same product. 
</p><ul><li><p>Former Googlers have told me that while they have all the resources, the organizational structure makes it far too hard to ship great products.</p></li><li><p>Therefore, large products have incoherent, low-quality user experiences;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a></p></li><li><p>Internal rivalries lead to sub-par, team-over-company outcomes;</p></li></ul></li><li><p>Lawyer-driven development: excessive fear and caution around <em>launching products</em>, owing to operating in so many geographies and to AI having unpredictable outputs;</p></li><li><p>Managing for near-term shareholder outcomes creates a demand for caution not to jeopardize Search.</p></li></ul><p>What&#8217;s missing is <strong>courage</strong>. The past few years have been enormously rewarding to Mark Zuckerberg, Sam Altman, and Elon Musk &#8212; these Nietzschean characters with tremendous will to power, who will bet big and hard and take huge risks with asymmetric payoff no matter the scale. These are wartime CEOs, true live players who will reconfigure reality around themselves and who would not hesitate to fight their competitors to the death in hand-to-hand combat if necessary. </p><p>First and foremost, Sundar must pick up the wartime mantle and act far more aggressively. It is time to truly compete, and the recent quasi-acquisition of Windsurf is a good first step. Beyond that, there are four key steps:</p><ol><li><p><strong>Google&#8217;s <a href="https://gemini.google.com/">Gemini</a></strong> needs to be front-and-center. 
It needs to be on the google.com homepage, perhaps where Google Images is now.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a> This needs to launch <em>as soon as possible</em>, no matter what the armies of internal worrywarts say.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a> Google must push, push, push Gemini to supplant ChatGPT.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a></p></li><li><p><strong>Google must reorganize to enable itself to ship great AI software quickly.</strong> Too many disparate teams are pulled in to work in loose concert, so speed and overall quality suffer. One approach would be to take the best and brightest, fully mirror the OpenAI structure internally, and then aggressively keep growing that team, drawing talent from other divisions as it succeeds.<br><br>This is hard! There is lots of corporate inertia against it. And I suspect it is particularly challenging for Sundar because he rose through Google&#8217;s complex political culture &#8212; and now he must smash and reorganize the structure that once enabled him to succeed. That&#8217;s hard on many levels. But it is necessary.</p></li><li><p><strong>The mandate must be explicit and come from the top.</strong> The Google bears are fundamentally right that <em>Search over a Big Corpus of Hyperlinks </em>has an expiration date on it. Larry and Sergey were wise to retain super-voting shares &#8212; when push comes to shove, they can do whatever they want. They now need to exercise their legal and moral authority as founders to turn the ship.</p></li><li><p><strong>Brace the shareholders. </strong>Google&#8217;s near-term financial results need to take a backseat to winning the AI race. 
This is a very simple priority as a matter of expected value. If Search traffic dips, if CapEx gets expensive, if operating profit temporarily shrinks &#8212; none of these things matter given the stakes of the game. (And losing the game is a much worse outcome.)</p></li></ol><p>I&#8217;m long Google because I believe that these four steps are readily attainable<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> and will unlock trillions of dollars in value. I don&#8217;t know how quickly Google will get there, but its momentum and resources are so great that they <em>should</em>, even if there&#8217;s some near-term bleeding. <strong>Google&#8217;s position is so favorable that they would have to mess it up immensely not to win.</strong></p><h3>Conclusion</h3><p>If you believe that AI will be everywhere, then you should bet on the player that already is everywhere. Google is the winner-by-default in this arena,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a> and the moment this starts to crystallize, the public markets will react. 
There may well be near-term volatility,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a> and the task of reconfiguring the company around AI requires bold and uncomfortable action, but the long term looks good:</p><ol><li><p>Google has everything required to win;</p></li><li><p>They just have to not mess it up;</p></li><li><p>Winning unlocks trillions of dollars in revenue opportunity;</p></li><li><p>Winning provides a moat via a positive feedback loop.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-24" href="#footnote-24" target="_self">24</a></p></li></ol><p>Right now at $2.1T, none of this is priced in.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-25" href="#footnote-25" target="_self">25</a> Many investors, when looking at particularly large assets, will implicitly believe that they&#8217;re fully valued. It&#8217;s hard to believe that a trillion-dollar thing, with so many smart analysts looking at it, is actually <em>undervalued</em> &#8212; it doesn&#8217;t pattern-match how we think of a bargain, the small diamond in the rough. And it feels a little crazy to suggest it could be 10x or 20x larger.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-26" href="#footnote-26" target="_self">26</a> But I&#8217;ve seen this play out enough times in my life to know that it happens. We live in an era of returns to scale, and software continues to eat the world. Progress in AI remains rapid, and the economic consequences are, in some respects, simple and undeniable. The prize ahead is the most valuable one there has ever been, and the public has not yet fully internalized this. The race to AGI is the greatest competition of our lifetimes.
Good luck to all!</p><p><em>In alphabetical order, thanks to Anon, Archie, Chris, Coby, John, and Paul for discussions over the year-and-a-half that led to this piece.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>By this I mean that I don&#8217;t really have good intuition for the advertising industry as a whole, and while I recognize that Google has set up a valuable and powerful system of network effects, the exact mechanics of them have always been somewhat opaque to me. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This article is already thousands of words without touching on Waymo, and Waymo&#8217;s $1T-or-not is marginal to my overall point, so I&#8217;ll just put the argument in this footnote: </p><ol><li><p>The self-driving tech is here now. This is no longer speculative. It is clear that all driving will become self-driving in the next decade or so.
Self-driving cars are magical, and once somebody tries one, there&#8217;s no going back.</p></li><li><p>Most of Waymo&#8217;s competitors have fallen off, leaving the field wide open for Waymo to lead. Tesla is next-closest, but its self-driving technology still seems <a href="https://www.reuters.com/business/autos-transportation/teslas-robotaxi-peppered-with-driving-mistakes-texas-tests-2025-06-25/">less mature</a> than Waymo&#8217;s, and its regulatory position is also still much less advanced. </p></li><li><p>It would <a href="https://electrek.co/2025/05/05/waymo-plans-to-more-than-double-its-self-driving-i-pace-fleet-within-the-next-year/">take a while</a> for Waymo to roll out a large fleet of their own, but I expect Waymo will partner with automakers <a href="https://waymo.com/blog/2025/04/waymo-and-toyota-outline-strategic-partnership">like Toyota</a> to give their cars self-driving capabilities. That can roll out rapidly, and capture all the margin as software revenue. Importantly, Tesla would have a much harder time doing this, because it would compromise/conflict with their existing auto manufacturing business. </p></li><li><p>Waymo as the first mover has a reasonable chance of being the market leader, and Uber is currently valued at $200B while having <a href="https://www.urbanismnext.org/resources/how-much-traffic-do-uber-and-lyft-cause#:~:text=Key%20findings&amp;text=The%20new%20findings%20show%20that,percent%20of%20all%20vehicle%2Dmiles.">barely even put a dent</a> in the transportation market as a whole.
It follows that the bull case for Waymo is a full order of magnitude larger.</p></li></ol></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Any mention of ChatGPT outperforming Google Search deserves a note about Google&#8217;s deliberate multi-year kneecapping of <a href="https://www.wheresyoured.at/the-men-who-killed-google/">search result quality</a>. For the linked article, I think Ed Zitron is otherwise a bad pundit and wrong about lots of things, but this one seemed fine. (Perhaps this is subject to some <a href="https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect">Gell-Mann Amnesia</a>.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Credit to <a href="https://x.com/jerrycap/status/1940922809653539030?t=P_1qAPWxIad-zsBCUF62qw">JerryCap on Twitter</a>, who pointed this out. I liked his note that Google&#8217;s TAM is expanding much faster than ChatGPT can take market share.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Early press comms from OpenAI and Perplexity, who are looking to monetize via this angle, are also giving this impression. The suggestion is not just that they would compete with Google, but that they can monetize <em>even better</em>. 
If true, that&#8217;s very bullish for Google.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Skeptical readers may point out that these &#8220;most trusted brand&#8221; surveys are going to be fuzzy and unreliable. Fine, but the overarching point still stands: regardless of whether it&#8217;s #3 most trusted and #1 most recognized or whatever, they have an <em>extremely</em> powerful consumer brand.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>At the time of writing, Gemini-Pro-2.5 is #1 for Text, WebDev, Vision, and Search. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>The most recent datapoint on the size of Google Books is that by 2019, it had <a href="https://www.blog.google/products/search/15-years-google-books/">apparently</a> scanned in 40 million books in 400 languages.
By comparison, the Library of Congress carried <a href="https://www.britannica.com/topic/Library-of-Congress">around</a> 25 million books at the time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>And GMail&#8217;s penetration in terms of <em>data </em>is much larger still: keep in mind that even when someone who doesn&#8217;t use GMail sends email to someone who does, Google gets a record of that communication.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Sources here are a little suspect, but the fact that Google Drive/Docs/Sheets is mostly free and Microsoft Office mostly isn&#8217;t makes a huge difference.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>This gets a little less reliable since this is not clearly broken out in their public filings, but you broadly have the following datapoints:</p><ul><li><p>$10-15B/year in CapEx for 2006-2019 (~$140-210B total)</p></li><li><p>$30B/year in CapEx for 2019-2024 (~$150B total)</p></li><li><p>Analysts usually estimate that 70-80% of Google&#8217;s CapEx goes into data centers and infrastructure</p></li></ul><p>Taking 70-80% of $290-360B yields a $203-288B range.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>As a fun thought experiment, consider this: if Thrive and SoftBank value OpenAI at $300B &#8212; and that&#8217;s while 
relying on third-party chips and data centers, building out their own brand and distribution, needing to raise many more tens of billions of dollars &#8212; by that standard, what&#8217;s a fair valuation for <em>Google&#8217;s position alone</em>, in expected value terms? A few hundred billion dollars? A trillion? If that seems too high, revisit the list of what Google has, look at OpenAI&#8217;s public roadmap, and let it sink in.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Of course, this is in zero-sum competition with other companies offering the same competitive product. This means that the amount of money that the company will have to give up rises steadily over time, slowly approaching 100% of the firm&#8217;s gross margins. This is the true genius of Google Ads&#8217; model of selling black-box ROAS.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>So far, I think it&#8217;s clear that people prefer the ChatGPT-style experience to browsing ugly, clunky websites plastered with ads.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>The combined global revenue of the major white-collar professional services (legal, consulting, accounting, etc.)
is around $7.0 - $7.5T annually.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>This is important to note partly because definitions of AGI are so fuzzy. I&#8217;m pretty sure that ten years ago, people would&#8217;ve called what we have today AGI, and I suspect that people will keep finding reasons not to call current-generation AI AGI. The goalposts move as our understanding deepens.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>Let me vent: just look at this! On Google&#8217;s homepage, there&#8217;s no way to get to a ChatGPT-style interface. However, in the top-left corner, there&#8217;s a little icon for their Search Labs. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!DesV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cae8518-0720-4b3e-b818-2d1318a13154_1179x827.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!DesV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cae8518-0720-4b3e-b818-2d1318a13154_1179x827.png" alt="" loading="lazy"></picture></div></a></figure></div><p>What happens if I click on it?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Yp5q!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F664df0b1-c52f-4866-b7f9-0ecf53dc1bed_1179x1436.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><img src="https://substackcdn.com/image/fetch/$s_!Yp5q!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F664df0b1-c52f-4866-b7f9-0ecf53dc1bed_1179x1436.png" alt="" loading="lazy"></picture></div></a></figure></div><p>There are pre-seed companies that wouldn&#8217;t make this mistake! If you&#8217;re going to do user feature-flagging, then hide the button based on feature-flag status, rather than showing the button, letting the user click it, and then dropping them on a disappointing &#8220;sorry!&#8221; screen. I&#8217;m sure that hundreds of thousands of people are hitting this every day. This is some of the most valuable screen real estate on the planet. It is unbelievable that it is wasted like this.
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>Moving Google Images back a little bit in terms of prominence may be a good thing: Google Images has degraded severely in quality over the past few years. As far as I can tell, that&#8217;s because of SEO spam polluting the search space. It&#8217;s much harder for me to find an appropriate image on Google Images now than it was ten years ago.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>The &#8220;AI Mode&#8221; product that is being piloted with certain users is a good first step.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>Maybe this will look similar to how Microsoft has used its whole Office/Teams suite to squeeze Slack out of its market.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>I always like investing when the overall sentiment is crowding out the facts: when people are bearish but there is only a <em>very small number of things that need to go right</em>, perhaps take the bet that those things can go right! People like to imagine that mismanagement is boundless, but it&#8217;s corrigible. I moved back to San Francisco with a similar thesis: sure, the city has problems, but all of them boil down to one or two simple governance issues, which are totally fixable. 
This bet has worked out pretty well over the past few months.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>The other dark horse here is Microsoft, which also has enormous distribution and <em>could</em> stand up world-class AI capabilities overnight. Meta is trying, but their distribution is much less suitable.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p>Some people think that Search volumes may decrease, and the public markets might get scared. Perhaps there&#8217;s a 30% stock price dip on the horizon; who knows. This kind of thing is difficult to time. My view is that I&#8217;m happy to ride it out, and if the price dips, I&#8217;ll probably double down on my position, barring any change to this thesis. Even aside from AI, I think the durability and stickiness of Google&#8217;s offerings are underrated.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-24" href="#footnote-anchor-24" class="footnote-number" contenteditable="false" target="_self">24</a><div class="footnote-content"><p>I didn&#8217;t discuss this elsewhere: the positive feedback loop is usage. Every time someone uses Google&#8217;s AI products, they should improve by virtue of having more data, and it also reinforces distribution: the user is going to Google, having a fine experience, and will therefore go to Google next time as well.
Being the first port of call is very sticky.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-25" href="#footnote-anchor-25" class="footnote-number" contenteditable="false" target="_self">25</a><div class="footnote-content"><p>Google&#8217;s market cap is about $2.1T, of which ~$100B is cash-on-hand, and <a href="https://variety.com/2025/digital/news/youtube-valuation-worth-550-billion-analysts-1236352586/">$500B is YouTube</a>. The remaining $1.5T seems to be valued almost entirely on Google&#8217;s Search business, which mostly discounts all the other assets. (Remember our thought experiment from footnote 12: what&#8217;s the strategic position for AI worth just on its own?) And while it&#8217;s hard to value <a href="https://en.wikipedia.org/wiki/Calico_(company)">Calico</a>, <a href="https://en.wikipedia.org/wiki/Wing_Aviation">Wing</a>, and the many other subsidiaries, it&#8217;s clear that Waymo&#8217;s <a href="https://www.investors.com/news/technology/google-stock-valuation-waymo-robotaxi-leader/">$45B valuation</a> will increase significantly from here. <br><br>Semi-related, Waymo and Search are somewhat marginal to my bull-case scenario, but they do provide some downside protection and contribute in expected-value terms. The &#8220;worst-case&#8221; scenario that I can envision is that Google doesn&#8217;t get there quickly enough, but the momentum of Search would still sustain some revenue growth such that a medium-term collapse in stock price seems unlikely to me. I think this is an asymmetric bet. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-26" href="#footnote-anchor-26" class="footnote-number" contenteditable="false" target="_self">26</a><div class="footnote-content"><p>Forecasting at this level of scale gets a little fuzzy &#8212; we&#8217;re talking about AGI-empowered Google substantially eating up software and professional services.
Nobody knows what revenue multiples or margin compression would look like. Further, if we start to seriously automate big chunks of the labor economy, then lots of dynamics will change in weird ways, including e.g. the value of money itself, but that&#8217;s neither here nor there for the purposes of this post.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#26: Bitcoin Without a Fight]]></title><description><![CDATA[Dial the clock back by thirteen years. Bitcoin is trading at five dollars a coin. You ask me &#8220;what will the world look like when Bitcoin is at $100,000?&#8221;]]></description><link>https://essays.johnloeber.com/p/26-bitcoin-without-a-fight</link><guid isPermaLink="false">https://essays.johnloeber.com/p/26-bitcoin-without-a-fight</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sun, 09 Mar 2025 20:23:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d956a6d8-3def-48c5-aa3a-f8c2a80dea23_1536x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dial the clock back by thirteen years. It&#8217;s the spring of 2012. Bitcoin is trading at five dollars a coin. You ask me &#8220;what would the world look like at $100,000 a coin?&#8221; When Bitcoin is 20,000 times more valuable? It&#8217;s unimaginable. Maybe the financial systems of some small countries have failed, and been taken over completely by Bitcoin. Surely it has become a popular currency. You ask me &#8220;what would the world look like when the US Government officially starts keeping a Bitcoin reserve?&#8221; Again, it&#8217;s unimaginable. Surely there&#8217;s been a big fight? A great showdown between Bitcoin and the almighty dollar?
And somehow, scrappy Bitcoiners came out on top?</p><p>This week, the President of the United States has promised to make America the &#8220;<a href="https://www.nytimes.com/2025/03/07/technology/trump-crypto-summit.html">Bitcoin superpower of the world</a>&#8221; and established a Strategic Reserve that is never to be sold. I&#8217;ve been thinking a lot about this, not just because it&#8217;s amazing news for Bitcoiners, but because the world has turned out surprisingly differently from what Bitcoiners expected ten or fifteen years ago.</p><h3>Setting the Scene</h3><p>In the late 2000s, American institutional power crested an all-time high. The country rallied around the flag after 9/11, the government expanded significantly, and wars in Iraq and Afghanistan made clear that America would defend its interests, no matter what. The Subprime Mortgage Crisis of 2008, resolved by a <a href="https://en.wikipedia.org/wiki/Emergency_Economic_Stabilization_Act_of_2008">$700B+ bailout for banks</a>, once more drove the message home: true free-market economics be damned, American institutions must survive. </p><p>But the Subprime Mortgage Crisis showed millions of Americans that the institutions had, in some way, failed. And few people were satisfied by how it resolved: on the left, people thought the banking and mortgage industries hadn&#8217;t been punished enough, and on the right, people thought that the government had failed to uphold the free market. <a href="https://en.wikipedia.org/wiki/Occupy_Wall_Street">Occupy Wall Street</a> on one hand, and <a href="https://en.wikipedia.org/wiki/End_the_Fed">End the Fed</a> on the other.</p><h3>Bitcoin as Political Instrument</h3><p>Bitcoin launched into this uniquely charged political zeitgeist, right around the very peak of the financial crisis. Many people felt screwed by the system. 
The idea of using Bitcoin for savings and financial transactions &#8212; <em>be your own bank!</em> &#8212; sounded like a cool way of sticking it to the banks. But Bitcoin went much further: giving people trustless, decentralized money was a great way of sticking it to the <em><a href="https://en.wikipedia.org/wiki/Central_bank">central banks</a></em>. Many early Bitcoiners saw a vision, far in the distance: if you can separate <em>money</em> from the <em>state</em>, then that takes away a core power of the state. </p><p>Satoshi himself was careful to be politically neutral, though his libertarian, <a href="https://en.wikipedia.org/wiki/Cypherpunk">cypherpunk</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> leanings <a href="https://news.bitcoin.com/satoshi-revolution-chapter-2-satoshi-libertarian-anarchist-part-4/">shone through</a>. But practically, many early adopters had very strong views on (monetary) politics: <a href="https://bitcointalk.org/index.php">Bitcointalk</a> in those days had a lot of discussion about Ending the Fed, anarcho-capitalism, and generally pushing back on the state in favor of individual privacy and liberty. Just by being a powerful tool for political causes, Bitcoin was necessarily political.</p><h3>Surely Governments Will Try to Kill It?</h3><p>In the long term, weakening the monetary autonomy of states is a tremendous threat to them.
Back then, I wondered if Bitcoin could survive: in those days of strong American institutions, it seemed only like a matter of time until the <a href="https://en.wikipedia.org/wiki/Statism">statists</a> in charge would nip this little experiment in the bud.</p><p>In May 2013, the US killed <a href="https://en.wikipedia.org/wiki/Liberty_Reserve">Liberty Reserve</a>, an offshore centralized digital currency service &#8212; imagine PayPal without KYC/AML.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> The CEO got 20 years in prison for money laundering. At the same time, governments everywhere seemed to be slowly cracking down on <em>cash (that they issued!)</em> just because it&#8217;s not traceable. </p><p>Those were ominous times. After the big <a href="https://en.wikipedia.org/wiki/Mt._Gox">MtGox crash</a> of early 2014, the price went sideways for three years. High-profile <a href="https://plan99.net/~mike/index.html">early Bitcoiners</a> quit. Simple legal questions, like whether a Bitcoin wallet operator was a <a href="https://en.wikipedia.org/wiki/Money_transmitter">money transmitter</a>, were unresolved. The US financial system is so rife with opaque regulations that many early Bitcoiners were worried that they were doing something that would one day be deemed illegal.</p><p>Even years later in 2017 and 2018, I remember speaking with people who were sincerely worried about one day being persecuted for their Bitcoin evangelism. They saw the totality of what large-scale Bitcoin adoption implied, and the challenge it would pose for governments and their fiat currencies. It was a common view that there was a Big Fight ahead, that one day the regulators would wake up from their slumber and fire every weapon they had. </p><h3>Surprise!</h3><p>The Big Fight never came. Regulators slowly clarified the important questions, and the law turned out to be favorable. 
Cryptocurrencies became mainstream via several hype cycles of ICOs, NFTs, and memecoins. Those hype cycles got a lot of people interested, and created a &#8220;pro crypto&#8221; political constituency. Most of these people had very little in common with the Bitcoiners of 2012, but that didn&#8217;t matter &#8212; they thought the coins were cool,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> and they had the right to vote. </p><p>As crypto became a voting bloc, the Big Fight became more and more unlikely: shutting it all down would really require bipartisan consensus, but that wasn&#8217;t going to happen &#8212; if one party rejected this constituency of voters, the other party would welcome them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> The Biden administration rejected crypto voters, pressured the industry under <a href="https://financialservices.house.gov/news/documentsingle.aspx?DocumentID=409457">Operation Chokepoint 2.0</a>, but Trump took the crypto voters as allies, and today we have a Bitcoin Strategic Reserve. </p><h3>American Institutions Have Changed</h3><p>But the success of Bitcoin is not just about voting bloc dynamics. Over the last fifteen years, the formerly strong American institutions have also pulled back significantly, and created an opening of permissibility for Bitcoin to succeed. Things have changed:</p><ul><li><p><strong>Covid</strong>: the Fed pulled out all the stops, printed 3 trillion dollars, and caused significant inflation. If they&#8217;re willing to do that, then how sacred is the dollar?</p></li><li><p><strong>Defense</strong>: the US has pulled out of its wars, and pursued isolationist foreign policy.</p></li><li><p><strong>Government</strong>: there is widespread dissatisfaction among Americans with their leaders and institutions. 
Donald Trump has been elected President twice on the articulation of that dissatisfaction, and the promise to shake up the system.</p></li><li><p><strong>Trust</strong>: the <a href="https://news.gallup.com/poll/508169/historically-low-faith-institutions-continues.aspx">trust that Americans have in their institutions and authority figures</a> is at an all-time low. People don&#8217;t trust their doctors or schools, let alone banks.</p></li></ul><p>The last decade has been a time of relatively weaker institutions. Donald Trump&#8217;s promise in his first term was to <em>drain the swamp</em> and shake up Washington. Biden&#8217;s term didn&#8217;t change American institutions much one way or another, and Trump&#8217;s second term so far has been characterized by mass firings and dissolution of certain government agencies.</p><p>Today, there isn&#8217;t as much willpower to defend the institutions as there once was. If Bitcoin had gone mainstream in 2003, the Bush administration would&#8217;ve probably tried to nuke it just like Liberty Reserve. But Bush-era conservatives would be surprised by politics today in general. 
Imagine a cabinet official in 2004 openly talking about the dollar losing reserve currency status and hedging against it!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!InLL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!InLL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png 424w, https://substackcdn.com/image/fetch/$s_!InLL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png 848w, https://substackcdn.com/image/fetch/$s_!InLL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png 1272w, https://substackcdn.com/image/fetch/$s_!InLL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!InLL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png" width="904" height="397" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:397,&quot;width&quot;:904,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:94941,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://loeber.substack.com/i/137312852?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!InLL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png 424w, https://substackcdn.com/image/fetch/$s_!InLL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png 848w, https://substackcdn.com/image/fetch/$s_!InLL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png 1272w, https://substackcdn.com/image/fetch/$s_!InLL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4230684e-18b2-42aa-95ca-e787bde09c51_904x397.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>Speculation De-Fanged Bitcoin</h3><p>Early on, there was a common belief that Bitcoin would be viewed as a threat, and banned in many countries. It would be a slow grind for expansion over time, <a href="https://x.com/lopp/status/1898371677882355775">working its way up</a> from overtaking the weakest financial systems. But in reality, <a href="https://en.wikipedia.org/wiki/Legality_of_cryptocurrency_by_country_or_territory">almost no country</a> has truly banned it.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> To the surprise of many early Bitcoiners, governments mostly didn&#8217;t care.</p><p>What had happened was that all the speculators came in, and changed the tone. They made it much easier for governments to get comfortable with it.
Early Bitcoiners were talking about buying groceries with Bitcoin, using Bitcoin ATMs, etc., framing it as an immediate fiat replacement.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> But people buying memecoins, or even saving Bitcoin as &#8220;digital gold&#8221; aren&#8217;t threatening the dollar in the same way. </p><p>If the vibe of crypto had stayed like the early days &#8212; people talking about being their own bank and separating money and state &#8212; it might have faced more scrutiny.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> But instead, the crypto community was overrun by people saying things like &#8220;my dentist bought a Lambo after making 100x on Dentist Coin&#8221;. This apolitical<em> </em>appearance of harmless get-rich-quick-schemes is much easier for governments to ignore.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> Ironically, this made it easier for Bitcoin to succeed as a political project long-term.</p><h3>The Fork in the Road</h3><p>Looking back, probably the most important thing for Bitcoin was that Donald Trump won in 2016, and Hillary Clinton lost. Clinton was a serious statist who would&#8217;ve taken a hard-line pro-American-institution position on every issue. Donald Trump, by contrast, was tapping into an electorate that wanted to shake up the institutions. </p><p>While Trump made some disparaging comments about crypto during his 2016-2020 term, his administration mostly left crypto alone. By contrast, Clinton&#8217;s views on crypto remind me more of Elizabeth Warren&#8217;s views, who has been one of crypto&#8217;s fiercest critics, always pushing legislation to hobble the crypto markets. </p><p>2016 was probably the last time that it was really possible to kill Bitcoin. 
It ended the year at ~$800 a coin, a $15B market cap. It was small, hadn&#8217;t run through a public hype cycle, and didn&#8217;t have many investors who would stand up for it. A serious attack from Clinton-Warren-Gensler types, something on the order of Operation Chokepoint 2.0, could&#8217;ve killed Bitcoin adoption in the US, and maybe even pushed for some international consensus among G8 countries.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> </p><h3>Where Bitcoin Goes from Here</h3><p>Bitcoin&#8217;s position finally seems secure. At long last, its <a href="https://coinmarketcap.com/charts/bitcoin-dominance/">market cap dominance</a> has been steadily growing. There&#8217;s no longer the risk of getting &#8220;flippened&#8221; by higher-tech alternatives; the positive feedback loop of <em>confidence in Bitcoin</em> is winning. Regulatory uncertainties have been blown away, and Bitcoin is now positioned favorably to, and distinct from, all other cryptocurrencies.</p><p>It&#8217;s probably too big to fail now. It has institutional support at the highest levels, and will grow into an even stronger position by the end of the Trump presidency. Four years from now, it will be too deeply intertwined with the financial system and political interests to undo. The possibility of the eventual Big Fight now seems remote.</p><p>In some ways, it looks like an easy trade. Bitcoin is &#8220;expensive&#8221; today relative to the past, but it&#8217;s far more de-risked than it has ever been before. For once, the future is somewhat predictable.
It seems like all the obstacles have been cleared, and it&#8217;s a straight shot for digital gold to overtake physical gold &#8212; that&#8217;s a 12x from here.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p><h3>Reflections</h3><ul><li><p>When I was looking at Bitcoin early on, I was 19 or 20. I viewed the positions of American institutions as <em>fixed,</em> because they were all I had ever seen. It turned out these positions weren&#8217;t fixed at all. Over the course of ten years, things can actually <em>really change</em>. If I had been a little older and seen more change, maybe I would have understood this at the time.  This is my biggest lesson.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></p></li><li><p>Early Bitcoiners viewed the state as a bipartisan, self-interested monolith, that they would eventually conflict with. It turned out that the state is not monolithic, and that a controversial idea can succeed when it becomes a wedge issue, with different parts of the state on either side of it.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> Technologists stereotypically have little patience for the political process, but it turned out to work fine.</p></li><li><p>Bitcoin never had to go underground. Bitcoin&#8217;s decentralization never really got tested in practice &#8212; most activity is still via exchanges. Perhaps just having a credible defense is good enough to never see it get stressed? People say &#8220;you can&#8217;t ban bitcoin, it&#8217;s decentralized&#8221; and so nobody really ever tries.</p></li><li><p>Speculators not only brought massive amounts of capital to the ecosystem, but also took the political edge off its appearance, without compromising the long-term vision. 
Early Bitcoiners worried about many obstacles to adoption: <em>is it not private enough? Will it be illegal? What if people don&#8217;t want to be their own banks? </em>But almost everyone underestimated how easily Bitcoin could go mainstream: give people a way to get rich, and they will move mountains for you.</p></li></ul><p>To a meaningful extent, Bitcoiners have won. And it all happened without a fight. The chance of getting to this point seemed <em>so small </em>thirteen years ago. But connecting the dots looking backwards, the simple market dynamics at work did not leave much up to chance. Perhaps the odds were always very good. The power and elegance of a system bootstrapping its own value just from provable scarcity seems to have been, and perhaps still is, undervalued and misunderstood.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>In the 1980s, technologists noticed that personal computers made strong encryption available to ordinary people, and that encryption could be used to preserve their privacy, and thereby their liberty.
These were the <a href="https://en.wikipedia.org/wiki/Cypherpunk">cypherpunks</a>, and their political activism was instrumental in creating the free and open internet we know today.</p><p>Strong encryption underpins the free and open internet. However, strong encryption was historically a military technology. It was on the list of <a href="https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States">controlled exports</a>, which created legal hurdles to deploying it on the internet. Early cypherpunks pursued a number of lawsuits and publicity stunts that led to strong encryption slowly becoming <em>legally available</em> to consumers. This took a long time: it was 1996 by the time that Bill Clinton signed the executive order to formally remove commercial encryption technology from the munitions export list. Had the cypherpunks not put in the legal legwork, the world might look very different &#8212; and in my view, worse &#8212; today.</p><p>Beyond freedom of speech, the cypherpunks turned their eyes to the freedom to transact. They created early digital currencies like <a href="https://en.wikipedia.org/wiki/Ecash">ecash</a>, <a href="https://en.wikipedia.org/wiki/Wei_Dai#b-money">b-money</a>, and <a href="https://en.wikipedia.org/wiki/Nick_Szabo#Bit_gold">bit gold</a>.
Bitcoin is a direct descendant of their early efforts.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>As a sign of how much has changed, you might say this is not too different from some shady crypto exchanges today that have lax or absent KYC/AML requirements.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>And they thought the coins would make them rich! (And for many of these early adopters, that&#8217;s how it turned out!) Never underestimate the political power of a strong financial incentive.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Early Bitcoiners recognized this game-theoretic dynamic would play out among nation-states &#8212; if one country is opposed to Bitcoin, another can benefit from permitting it &#8212; but it hadn&#8217;t dawned on me that the exact same dynamic would play out in US politics!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Not to mention that some of the recent tariff policy is so aggressive that <a href="https://x.com/hamandcheese/status/1886141069152108567">some commentators think</a> these are moves to prepare for de-dollarization. 
Again, it&#8217;s noteworthy to me how much the Overton window has shifted regarding the sanctity of the dollar.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>China is the big example where crypto has been nominally illegal for a decade, but it was simultaneously home to an absolutely massive crypto scene. This scene faced some crackdowns in late 2021, and has been smaller since, but it&#8217;s still a large market participant.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Back in the day, everyone knew the three-step process toward Bitcoin replacing fiat currencies:</p><ol><li><p>Store of value</p></li><li><p>Medium of exchange</p></li><li><p>Unit of account</p></li></ol><p>It seems that Bitcoiners overestimated how quickly this process would occur. People thought of <em>currency </em>as something actively used to transact, so the 2014 era was littered with Bitcoin ATM startups and other clunky attempts to transpose fiat use-cases into Bitcoin. It turned out that the <em>Store of value</em> step would take many years, but it&#8217;s also much less threatening to states: simply pitching Bitcoin as &#8220;digital gold&#8221; and not anything more, was the right move.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Another minor, but important &#8220;de-fanging&#8221; aspect is that Bitcoin doesn&#8217;t implement privacy. All activity is public and perfectly traceable, which is great for any state. 
There are some proposals and methods for adding more privacy to Bitcoin, but they&#8217;re a long way out. Again, a <a href="https://gwern.net/bitcoin-is-worse-is-better">worse-is-better</a> mechanic proved successful: privacy coins like Monero, ZCash, etc. have drawn much more scrutiny and are less available on major exchanges. It turned out that a compromised approach was much more effective for getting governments comfortable than a hardline libertarian-privacy technology from day one.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>More surprisingly, with each hype cycle there was a wave of crypto-related fraud, but little of which was ever prosecuted. Not only did governments not really perceive crypto as a threat to monetary autonomy, they cared so little that they didn&#8217;t bother going after any but the most brazen ICO scams. I suppose the government turning a &#8220;blind eye&#8221; goes both ways.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>By the time that the Biden administration ran OCP2.0, it was of course already too late. Bitcoin adoption was too widespread, and the network effect too powerful. When Bitcoin was very small in the early 2010s, it was probably possible to kill it just by hobbling adoption such that its network effect would never really start kicking in. Even then, <em>some</em> people would&#8217;ve still used Bitcoin. 
Perhaps adoption would&#8217;ve just been slower.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Bitcoin market cap at $1.7T, Gold market cap at ~$20T.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Cue the old adage that people overestimate what will change in two years, and underestimate what will change in ten years.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Perhaps a big lesson here is that interacting with &#8220;politicians&#8221; or &#8220;regulators&#8221; is not interacting with one large stereotyped group, but with individuals who are less ideological and more malleable in their views than the group as a whole.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#25: Nobody's Thinking Enough About AI]]></title><description><![CDATA[A very strange thing is happening.]]></description><link>https://essays.johnloeber.com/p/25-nobodys-thinking-enough-about</link><guid isPermaLink="false">https://essays.johnloeber.com/p/25-nobodys-thinking-enough-about</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sun, 09 Feb 2025 22:40:01 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7af18361-1f5b-42c1-9971-cb79e278aff0_1335x1291.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A very strange thing is happening. Technology is getting wildly better, faster than ever before. 
And while people are excited about the new AI products in the headlines, almost nobody is willing to look a few years into the future and ask seriously what this means. The following things happened in just the last few weeks:</p><ul><li><p>President Trump announced <a href="https://openai.com/index/announcing-the-stargate-project/">Project Stargate</a>, a $500B investment in infrastructure for OpenAI.</p></li><li><p>Dario Amodei, founder of <a href="http://anthropic.com">Anthropic</a>, <a href="https://arstechnica.com/ai/2025/01/anthropic-chief-says-ai-could-surpass-almost-all-humans-at-almost-everything-shortly-after-2027/">claimed</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> we are two to three years out from Artificial Superintelligence.</p></li><li><p>OpenAI released <a href="https://openai.com/index/introducing-deep-research/">Deep Research</a>, which is capable of creating in-depth, graduate-level research reports &#8212; thousands of words &#8212; in minutes.</p></li><li><p>OpenAI released <a href="https://openai.com/index/introducing-operator/">Operator Mode</a>, a computer use model that can use your browser to execute tasks just as a person would.</p></li><li><p>DeepSeek released an open-source model that is roughly on par with OpenAI&#8217;s O1, and claimed (<a href="https://darioamodei.com/on-deepseek-and-export-controls">misleadingly</a><a href="https://fortune.com/2025/01/27/china-deepseek-ai-claims-true/">[2]</a><a href="https://www.taiwannews.com.tw/news/6030380">[3]</a>) that it cost only $6M to train.</p></li></ul><p>Many well-respected AI industry leaders are now expressing fairly short timelines: on the order of <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">artificial general intelligence</a> (AGI) within 5 years.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" 
target="_self">2</a> These aren&#8217;t outlier views, either. On the technologist prediction market Metaculus, 1484 people have <a href="https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/">cast their bets</a>, and the median one is for May of 2030. The upper bound has been steadily coming down. &#8220;Later than 2038&#8221; is coming to be a fringe view. And of course, some think there&#8217;s a good chance we&#8217;ll get there next year.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!UfMv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe17cb1e0-791d-4e2f-b43f-9ba786f8c37a_1120x309.png" width="1120" height="309" alt="Metaculus community forecast for the date of artificial general intelligence"></figure></div><p>Whether it&#8217;s five, ten, or fifteen years &#8212; take that seriously, and let it sink in for a moment. That&#8217;s soon. We might be just a few years out from sharing the planet with software that is infinitely replicable and can do the exact same mental tasks as you and I. The implications are immense. For a start, it&#8217;d be the biggest job turnover cycle in history. Even if the chance of this happening were <em>small</em> &#8212; not zero &#8212; that still seems worth thinking seriously about, in the same way that we prepare for other contingency events.
But despite industry leaders being very clear about where things stand,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> there&#8217;s basically no public discussion about this. Not a peep from policymakers;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> the <a href="https://en.wikipedia.org/wiki/Overton_window">Overton window</a> isn&#8217;t there yet. This seems like a real miss.</p><p>This is a long article, so I&#8217;ll put the take-aways explicitly up front:</p><ol><li><p>Progress in AI is much faster than you think.</p></li><li><p>How to deal with AI-driven social and economic change is probably the <em>only public policy question that matters right now</em>.</p></li></ol><h3>Speed</h3><p>The first thing to recognize in AI progress is that it has already happened <em>much faster than people expected</em>. I remember very well the 2015-era hype cycle for machine learning:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> breakthroughs using <a href="https://en.wikipedia.org/wiki/AlexNet">neural networks</a> and <a href="https://en.wikipedia.org/wiki/Generative_adversarial_network">generative techniques</a> were finally returning rapid progress to a field that had stalled for years.
But researchers still quoted AGI timelines for the year 2050 or 2100, with many shying away from saying that it&#8217;s possible at all.</p><p>At that time, the idea of AGI by 2030 would&#8217;ve been received as a little kooky. Now it is mainstream. Only famously out-there futurist Ray Kurzweil seems to have gotten it right: he <a href="https://longbets.org/1/">bet</a> that the Turing Test would be passed by 2030. There&#8217;s a <a href="https://www.metaculus.com/questions/3648/computer-passes-turing-test-by-2029/">prediction market on this</a>: in early 2020, bettors were pricing this at 20%. Now it&#8217;s at 80%.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>During this time, we&#8217;ve kept coming up with AI tests and benchmarks, blasting through them &#8212; attaining parity with human experts &#8212; and then coming up with new ones. You know you&#8217;re making headway toward AGI when it&#8217;s getting harder and harder to come up with tests that distinguish human from AI performance.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!Ccyh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2418693a-28ec-4013-a7eb-16fabea7cc44_2084x1916.png" width="1456" height="1339" alt=""></figure></div><p>There&#8217;s now the cutely-named <em><a href="https://agi.safe.ai/">Humanity&#8217;s Last Exam</a></em>, a collection of 3,000
difficult questions across a hundred fields. While I think the exponential plot of its performance improvements is misleading,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> the progress is undeniable:</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!lZV5!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8710c038-a154-40be-bb9c-4ec983801cb6_2222x1758.jpeg" width="1456" height="1152" alt=""></figure></div><p>What&#8217;s going on here is that progress toward AI follows an exponential growth curve.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> Unfortunately, even highly intelligent, technical people <em>have no intuition at all for exponential growth</em>.
Most people&#8217;s imaginations are stuck in <em>linear growth</em> &#8212; and so, time and time again, people underestimate how quickly growth can compound.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!tYTP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6d1cf122-b9b1-43bb-a359-76b86aa8e77f_1500x743.png" width="1456" height="721" alt=""><figcaption class="image-caption"><a href="https://steemit.com/steem/@the-traveller/why-you-still-are-massively-underestimating-steemit-com-exponential-functions-you-probably-don-t-really-understand-them">Cartoon Source</a></figcaption></figure></div><h3>Why Aren&#8217;t People Talking About It?</h3><p>Though the headlines are everywhere, to most people they are still <em>abstract</em>. If they&#8217;re not using AI products, then they won&#8217;t develop intuition for those coming changes. Moreover, those changes may still feel far away because technology has historically been slow to percolate into the real world:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> plenty of things are still done manually even though software exists to do them.
Even the internet took years to go mainstream, and many more years to start truly reconfiguring the world around itself.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> </p><p>But these may not be the right analogies. Change comes slowly in the <em>physical world</em>, where logistical constraints exist. But change in the <em>digital world</em> &#8212; where we all now spend key parts of our lives &#8212; can come very, very quickly. </p><p>People already underestimate the speed and scope of present change because &#8220;AI&#8221; has been such a buzzword for over two years now.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> People aren&#8217;t noticing how much better these technologies have gotten during that time, because the form factor of interacting with them (i.e., for most people, the ChatGPT window) hasn&#8217;t changed. It still <em>feels</em> the same to the consumer,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> and the cycle from consumers being amazed by a new technology to totally taking it for granted is very short.</p><p>Finally, perhaps the very idea of AGI is still too &#8220;out there&#8221; for people to discuss. In a similar way, in most of the world the idea of a self-driving car is still space-age science fiction. But in San Francisco, people are taking self-driving cars <a href="https://loeber.substack.com/p/20-waymo-the-leapfrog">every single day</a>. The technology that always seemed forever away is finally arriving.</p><h3>It Gets Personal</h3><p>All these changes seem abstract and far-away until they knock on your door.
It&#8217;s really hard to appreciate or develop intuition for this progress until you see software doing the tasks that you take pride in: then, suddenly, it&#8217;s crystal-clear. In the next few years, most knowledge workers will have an uncanny realization when they&#8217;re looking at a screen and saying &#8220;wait, this thing can do what I do.&#8221; </p><p>Everyone has their <a href="https://x.com/polynoamial/status/1881039073558806617">Lee Sedol moment</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a></p><p>I had mine recently.</p><h4>Deep Research</h4><p>I&#8217;ve spent a lot of time in my life doing research. To that end, OpenAI&#8217;s Deep Research is amazing. It&#8217;s basically a longer-running version of ChatGPT &#8212; instead of visiting one or two websites and returning a short answer in thirty seconds, it visits dozens of websites and returns an essay-length answer in ten minutes. Objectively, it does a great job. (Example research projects in the footnotes.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a>)</p><p>Deep Research hugely decreases the intellectual &#8220;activation energy&#8221; required to learn about something new: instead of my having to muster the effort and spend many hours combing through resources to synthesize details, OpenAI does it all for me. This is greatly empowering<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> &#8212; but of course, at the same time, clearly the clock is now ticking for lots of highly paid knowledge work.
If we&#8217;ve gone from hallucinating-all-the-time early ChatGPT to this in just two years, where will we be in another two?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> </p><h4>Cursor AI</h4><p>I build software. When I was nineteen years old, I decided to make that my primary intellectual and professional quest. Twelve years later, I&#8217;ve become quite good at it. I&#8217;ve made things that I am proud of, some of which have been used by millions of people. Yet my experiences using AI to write code have put the writing on the wall: I have probably already written the majority of code that I will write in my lifetime. The skill that I have spent many years cultivating will slowly but surely become antiquated.</p><p>I had previously tried writing software using ChatGPT or Claude as an assistant to generate and modify code for me. But using <a href="https://www.cursor.com/">Cursor</a> with Claude 3.5-Sonnet was a much better, more fluid experience. Like Deep Research, it dramatically decreased the &#8220;activation energy&#8221; for starting a new project, and it easily generated hundreds of lines of complex-but-shallow code.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a> These are the parts of the work that would be most time-consuming for me, and it handled them with relative ease. This is even though Cursor is still early as a product: there are lots of bugs, it makes plenty of mistakes, and it doesn&#8217;t even leverage standard programming tools like linting or typing by default. There&#8217;s a lot of room for it to get a lot better very quickly.</p><p>Funnily enough, many programmers <em><a href="https://x.com/tsoding/status/1887750880901963802">don&#8217;t get this</a></em>.
In what seems to be a head-in-the-sand defense, many are looking at Cursor as it is today and saying &#8220;well, that&#8217;s not as good as me!&#8221;, unwilling to imagine just another three years of improvements. I&#8217;m hearing things like &#8220;well, you have to learn how to prompt it&#8230;&#8221; and &#8220;it can&#8217;t handle deep, complex problems&#8221; and &#8220;it can never <em>understand</em> a codebase as deeply as me&#8230;&#8221;</p><p>But this is obvious and willful cope from people who should know better: of course these are solvable problems. Each of these objections will fall as AI-assisted code editors integrate better tools and deploy more powerful, longer-context models that leverage more test-time compute. This seems inevitable to me. Once more, even smart, technical people do not necessarily have the intuition for exponential growth.</p><h3>Beware Dismissing the Questions</h3><p>Until 2022, a common attitude was that AI could never do <em>creative work</em> &#8212; that creativity was the domain of humans; that it required some special element<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a> that a machine could not replicate. This was popular to say and comforting to think, but now we know it also wasn&#8217;t true.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a></p><p>But maybe we always knew it wasn&#8217;t true: if you had really pressed this question ten or twenty years ago, you wouldn&#8217;t have gotten a good answer. People were dismissing these AI questions with lazy, hand-waving answers. The reality they were trying to avoid was very simple: if a human activity really boils down to <em>thinking</em> and AI can <em>think</em>, then that activity can be done by AI.
Understandably, people want to believe that what they do is special and not possible to automate. They won&#8217;t believe otherwise until they see a machine do it. We will see this play out many times over the coming decades: people wishfully asserting that something is unique to humans, and then eventually finding that it may not be so.</p><p>As we think seriously about the impacts of AI, there are plenty of these dismissive answers that are worth pressing. For example, people say not to be concerned about AI taking over jobs, because we&#8217;ve historically always created more, higher-leverage jobs as we&#8217;ve innovated old ones away. But there&#8217;s just no reason for that to hold <em>ad infinitum</em>. Why couldn&#8217;t AI perform those higher-leverage jobs, too? </p><h3>What Should We Be Thinking About?</h3><p>In the long run, AGI will transform our world, just like the industrial revolution did long ago. I think this can go very well for us, and the right attitude is cautious optimism. But no transformation of this size is without friction. While it&#8217;s not worthwhile to try to predict second- or third-order consequences (there are just too many variables at play), there are a few things that are straightforwardly predictable:</p><ol><li><p>Many jobs will cease to exist, and be performed by AI instead. </p><ol><li><p>Even if some of these jobs are replaced by new jobs, the cycle of <em>job replacement</em> still means transient unemployment en masse.</p></li></ol></li><li><p>People will generally have more and more free time.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a></p></li><li><p>On the internet, AI activity will <a href="https://en.wikipedia.org/wiki/Dead_Internet_theory">vastly outnumber</a> human activity.
</p><ol><li><p>If we believe that the distinction between human and machine matters, then working on proof-of-humanity seems important.</p></li></ol></li></ol><p>I&#8217;m not an AI Safety guy. I think AI will be fine. But I think that there is some danger of conflict between humans if we mess up the transition. For example, it&#8217;s easy to imagine a case where a rapid loss of jobs turns into social unrest, which spirals into something really ugly. It wouldn&#8217;t be the first time that people have felt <a href="https://en.wikipedia.org/wiki/Luddite">threatened by automation</a>, or that economic and social volatility has been exploited by demagogues.</p><h3>What Should Policymakers Think About?</h3><p>First, please note that public policy moves on slow timelines. Change takes years to enact. If we&#8217;re taking seriously the idea of AGI by 2030, to a policymaker that&#8217;s basically tomorrow. We need to get started. To me, there are several tasks ahead:</p><ol><li><p>The impacts of AI are mostly absent from contemporary policy discussions. This needs to change as quickly as possible.</p><ol><li><p>For a start, most policy issues will be affected in some way by AI. Simply asking &#8220;how will AI affect this topic?&#8221; will be a good way to start introducing the discussion at all levels, while also preparing for upcoming changes. If you&#8217;re debating policy X, it&#8217;s always worth reframing: <em>what does X look like in a world where we have intelligence too cheap to meter?</em></p></li></ol></li><li><p>We need to begin discussing the impacts of AGI by themselves, not just as a framing for contemporary issues. </p><ol><li><p>We need to solve for the <em>economic challenges</em> of the transition to AGI.
This mostly means preparing for rapid turnover within the labor market, and (in my opinion) for a rapidly shrinking labor force participation rate.</p></li><li><p>We need to solve for the <em>social challenges </em>of the transition to AGI. On the one hand, there will be some adversarial behavior to remedy. (Think AI-enabled spam and scams.) On the other hand, people will need to feel security, significance, and connection in a world where their labor or intelligence is suddenly no longer so special.</p></li></ol></li><li><p>We need to internalize that these questions are way more important than almost any other contemporary public policy debate. This is where we need to spend our time and effort. Topics like immigration, climate change, income taxes, or whatever else, do <em>matter</em>, but not nearly as much as the big change coming up. And all these topics will look different in a world with AGI, anyway.</p></li><li><p>Finally, we need to think deeply and carefully about what&#8217;s going to happen. Given the difficulty of predicting the future: if in doubt, under-regulate rather than over-regulate.</p></li></ol><p>I don&#8217;t have all the answers. My purpose with this essay is to get people thinking about these questions and appreciating their urgency. While there are some clear first-order action items, like needing to provide additional dignified ways for people to off-ramp from economic production,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a> I remain suspicious of easy answers that dismiss the questions: you can&#8217;t just say &#8220;UBI&#8221; and call it a day. You might be right in the long run, but it would gloss over the very real near-term challenges of getting there. </p><p>These are big topics &#8212; almost overwhelming in scope. Getting started won&#8217;t be easy. And stated in the abstract, they might not feel so urgent. 
But everywhere you look, we live in a world built around the unique value of human intelligence and labor: from the thirty-year mortgage to people&#8217;s identities being wrapped up in their vocations, meaningful changes are coming to fundamental aspects of our society. They are manageable changes, of course, but they deserve careful preparation. I can&#8217;t think of anything more important than getting the AI transition right.</p><div><hr></div><h3>Appendix: Clearly We Have Not Hit The Wall</h3><p>A few months ago, there was a moment of doubt about whether we had hit &#8220;the wall&#8221; in improving AI models: some people speculated that the <a href="https://www.oneusefulthing.org/p/scaling-the-state-of-play-in-ai">scaling laws</a> might not continue to hold, and that we might be in for an era of diminishing returns. </p><p>Right now, this doubt seems to be refuted. Progress is as fast as ever, though big gains seem to be coming from areas outside pure scaling. 
It&#8217;s worth revisiting Leopold Aschenbrenner&#8217;s <a href="https://situational-awareness.ai/wp-content/uploads/2024/06/situationalawareness.pdf">Situational Awareness</a>: while most of his text focuses on scaling laws, he points out that several orders of magnitude could be gained just from &#8220;unhobbling&#8221; the models by algorithmic improvements, and moving more resources to test-time compute. He cautiously suggests that it might be possible to reach AGI just via these two. My hunch is that this is about right. Andrej Karpathy <a href="https://youtu.be/hM_h0UA7upI?si=Jo9Nqc7Vlcm_sI_K&amp;t=1611">once said</a> that we might observe intelligence in a &lt;1B parameter model, and while it&#8217;s still early, <a href="https://x.com/saranormous/status/1883245259569889343">practical discoveries</a> are pointing this way, too.</p><p>Some people are (maybe wishfully) assuming that the big burst of progress is done and will level off from here. I don&#8217;t think that&#8217;s true. At least right now, it looks like the exponential growth is uninterrupted, and there is no reason to expect it to slow down.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There&#8217;s a temptation to say that he&#8217;s talking his book, of course. But unlike how I view some other major AI entrepreneurs, I take Amodei quite literally. 
I think he takes a pretty straightforward academic view and I haven&#8217;t noticed him oversell in any other areas.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For a couple of predictions:</p><ul><li><p>In late 2024, <a href="https://www.dwarkeshpatel.com/p/gwern-branwen">Gwern</a> gave it another 2-3 years.</p></li><li><p>In early 2023, John Carmack <a href="https://dallasinnovates.com/exclusive-qa-john-carmacks-different-path-to-artificial-general-intelligence/">quoted</a> 50% by 2030. I assume he&#8217;s <a href="https://x.com/ID_AA_Carmack/status/1806008392416137573">revised down since</a>.</p></li><li><p>Elon Musk has quoted 2029 on <a href="https://x.com/elonmusk/status/1767738797276451090">several</a> <a href="https://x.com/elonmusk/status/1531328534169493506">occasions</a>.</p></li><li><p>Sam Altman has suggested we could have AGI in the 2020s on several occasions.</p></li><li><p>Shane Legg <a href="https://time.com/6556168/when-ai-outsmart-humans">quoted</a> 50% by 2028.</p></li><li><p>Demis Hassabis <a href="https://www.youtube.com/watch?v=pZybROKrj2Q&amp;t=1s">has said</a> we&#8217;re &#8220;on track&#8221; for AGI by 2030.</p></li><li><p>Vinod Khosla <a href="https://www.wsj.com/articles/theres-a-better-way-to-predict-a-technologys-future-follow-the-rate-of-change-15199889">suggested</a> a timeline of 2030.</p></li><li><p>Jensen Huang <a href="https://www.reuters.com/technology/nvidia-ceo-says-ai-could-pass-human-tests-five-years-2024-03-01/?utm_source=chatgpt.com">suggested</a> we&#8217;d have it by 2029, depending on definitions.</p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Not to mention the 
considerable lobbying efforts by Sam Altman and others! It&#8217;s remarkable to me that <a href="https://www.politico.eu/article/open-ai-chatgpt-sam-altman-kicks-off-eu-charm-offensive-artifical-intelligence/">he&#8217;s met with basically every EU policy leader</a>, not to mention their equivalents in the US, but none of them have come forward with any discussion about how to manage the economic and societal change that is to follow.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>There&#8217;s plenty of discussion about AI as a national security interest in the US, and from a <a href="https://loeber.substack.com/p/14-why-europe-fails-to-create-wealth">misguided</a> regulatory perspective in the EU. But the domestic economic/social policy aspect is totally missing. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>I remember it vividly because I was there, and it was very important to me. 
I started taking AI/ML classes in the Spring of 2013, and that became the core focus of my undergraduate studies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>In my opinion, we&#8217;ve pretty well already passed the Turing Test, and the &#8220;80%&#8221; mostly reflects the precise details of the criteria for resolution of Kurzweil&#8217;s bet.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>OpenAI Deep Research has access to in-depth web search, and some of the earlier models don&#8217;t. If they had a search integration, maybe their performance would be better and the trend would look less exponential. I think the right way to view this is <em>either</em> as a step-function-change with Deep Research, <em>or </em>as a less steep exponential if you provided web search to the earlier models, too.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>I can think of two factors at play: first, to the extent that progress toward AI is compute-limited, the total amount of available compute seems to be increasing exponentially. (You can think of this as a kind of variant on Moore&#8217;s law.) Second, progress in AI is helpful toward creating more progress in AI. 
This self-reinforcing dynamic produces exponential growth by definition.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>There are many observations on this topic that fit well. I like one from Bill Gates: <em>We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>&#8220;75-95% of the productivity benefits of new technologies not from initial commercialization, but rather realized over years [decades] of diffuse implementation and incremental improvements / adaptations.&#8221; from <a href="https://x.com/JonathonPSine/status/1887982816501317915">Jonathon Sine</a> quoting James Bessen&#8217;s <a href="https://www.amazon.com/Learning-Doing-Connection-between-Innovation/dp/0300195664?crid=CQ79L4DGZF17&amp;dib=eyJ2IjoiMSJ9.OYLo5-jfDDsdc0g-vf5Ruk4mqY0290HNFJjXofcY5Af_7JQVSMlM0uOo_KUqZ3xPOVqrO6i54QJXG3X9aJxoUqO6d2e9BG0zRjXD36sAYQus0_Av7T992LfGjYyRPXUAjYoDoU-cJEeQbHk99f_wneZVytIwtOFu7gvZ_SZNKYI.x-ieO9BuVjj4yMTmdchbpVY41fvkzAJJQbBXXfRlpUk&amp;dib_tag=se&amp;keywords=learning+by+doing+bessen&amp;qid=1738988413&amp;sprefix=learning+by+doing+bessen%2Caps%2C138&amp;sr=8-1">Learning by Doing</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>The web opened to the public <a href="https://en.wikipedia.org/wiki/History_of_the_Internet#Internet_use_in_wider_society">in 1991</a>, and truly began gaining consumer adoption <a 
href="https://en.wikipedia.org/wiki/Eternal_September">in 1993</a> as websites became commonplace. While there was initial excitement, true penetration took a long time: even by 2005<a href="https://www.pewresearch.org/internet/2014/02/27/part-1-how-the-internet-has-woven-itself-into-american-life/"> only 66%</a> of Americans had internet access. (Global use lagged even further behind; Americans were early adopters.) I remember the 00&#8217;s as years when the online world was still considered something of a novelty; there was a kind of <em>the-internet-is-not-real-life </em>attitude. Ten years later, the opposite was true: everything was online, and in many cases, doing things in the physical world seemed antiquated.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>ChatGPT was released on November 30, 2022.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Note that the current ChatGPT is <em>way better </em>than when it was first released! It&#8217;s come a long way! But consumers struggle to notice this because it&#8217;s getting better in repeated incremental changes over time. 
If you hooked up a chat console to the now-deprecated GPT-3 API, you&#8217;d be shocked by how immature it was compared to what we have today.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>What this specifically refers to is Lee Sedol, one of the strongest Go players in the world, <a href="https://en.wikipedia.org/wiki/Lee_Sedol#Match_against_AlphaGo">losing to Google&#8217;s AlphaGo in 2016</a>. When he recognized that the AI was vastly stronger than him &#8212; and could never be beaten as it would only improve further &#8212; he retired from the sport.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Below are some examples for you: a variety of research projects I had it run on topics I was curious about.</p><ul><li><p><a href="https://chatgpt.com/share/67a31d01-b184-8003-ac4d-8d059cddbf25">US Dollar Positioning Under a Potential Trump Administration</a></p></li><li><p><a href="https://chatgpt.com/share/67a31d40-94fc-8003-915b-ac5bf7275f30">ServiceNow: Business Model, Products, Market Position, and Technology</a></p></li><li><p><a href="https://chatgpt.com/share/67a6f4d0-b0dc-8003-8068-532e6d8d56fa">Email Provider Pricing Comparison (Transactional Email Sending)</a></p></li><li><p><a href="https://chatgpt.com/share/67a2923a-e960-8003-8c1e-965b687f0015">Ringo Starr&#8217;s Drumming &#8211; Expert Evaluations and Legacy</a></p></li><li><p><a href="https://chatgpt.com/share/67a6ef0c-eec0-8003-91f9-ada440b8deb8">Cultural Formation of Germany: Prussia and the Smaller States</a></p></li><li><p><a href="https://chatgpt.com/share/67a6eedd-463c-8003-97cd-32ad3dcdb38c">Prussia and the Baltic German Communities (18th&#8211;19th 
Centuries)</a></p></li><li><p><a href="https://chatgpt.com/share/67a8087d-abcc-8003-9eff-b2664c15d7f5">Wittgenstein&#8217;s Philosophy of Mathematics and Finitism</a></p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>At the margin, this means I am now learning about things that I would never have the time to otherwise. When I was younger, I always dreamed of hiring a full-time research assistant to dig into all my curiosities. It looks like this will no longer be necessary.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>This is not a rhetorical question! Seriously, think about it and try to come up with an answer.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>By &#8220;complex-but-shallow&#8221; I mean logic that is complicated to write, but doesn&#8217;t make a lot of nested calls or carry side-effects that need to be handled in code elsewhere. Frontend applications are full of these things: components that require particular styling and event handlers are great examples. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p>A soul, if you are so inclined!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>You may have already seen this in generations from Midjourney or Pika or ChatGPT. Even if you don&#8217;t <em>like </em>their output, it is undeniably creative. But you can find this creativity in other fields, too. For example, take the <a href="https://arxiv.org/pdf/2502.03544">AlphaGeometry2 paper</a>: &#8220;our geometry experts and IMO medalists consider many AlphaGeometry solutions to exhibit superhuman creativity.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p>Some argue that historically, jobs have always gotten replaced by higher-leverage ones. But this is deceptive. The amount of leisure time that&#8217;s available to people has been steadily rising, and the percentage of the population that performs economically valuable work has been slowly decreasing for decades.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>Many of these off-ramps already exist in subtle or unofficial ways. In the US, twelve million working-age adults receive some form of federal disability benefits, and do not work at all, or only work part-time. 
In the EU, anecdotally, the scheme for young people who are having difficulty finding employment seems to be a prolonged stay in higher education &#8212; second Bachelor&#8217;s or Master&#8217;s degrees are becoming commonplace.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#24: Insurance for AI: Easier Said than Done]]></title><description><![CDATA[In the past few months, many friends have pitched or asked me about insuring AI risk.]]></description><link>https://essays.johnloeber.com/p/24-insurance-for-ai-easier-said-than</link><guid isPermaLink="false">https://essays.johnloeber.com/p/24-insurance-for-ai-easier-said-than</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Mon, 04 Nov 2024 22:44:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2775b2ed-3674-49c1-be69-2a6765c28442_1536x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In recent months, many friends have pitched or asked me about insuring AI risk. The idea is usually something like this: businesses want to adopt AI for efficiency, but they&#8217;re nervous about the AI hallucinating and making costly mistakes. Even if they buy all the best software to mitigate such mistakes, the scope of LLM outputs is so large that unpredictable, hugely expensive edge cases always remain. Insurance offers a clean way to transfer that risk. </p><p>You could read that as a bullish thesis for such an AI insurance product: imagine a world of widespread AI adoption, where every AI deployment is underpinned by an insurance policy. Or imagine a world where insurance products act as the critical enabler for widespread AI adoption in the first place.</p><p>But the thesis is not that easy! 
While I won&#8217;t present a slam-dunk view either way, I want to discuss some of the nuances and complexities that make this market tricky, and probably smaller than it appears at first glance.</p><h3>Insurance for (Software) Errors</h3><p>In the history of business, AI isn&#8217;t the first thing to make mistakes. Humans have been making mistakes for a long time. For that reason, accountants, lawyers, real estate agents, etc. all carry insurance &#8212; specifically, an <em>Errors &amp; Omissions</em> or <em>Professional Liability</em> policy that covers them if they make a costly mistake on the job and get sued by a client. </p><p>In recent decades, a significant amount of rote human labor has transitioned to being completed by software instead. This software transition was subject to the same concerns as the current AI transition: <em>can you really trust accounting software not to make mistakes? Won&#8217;t there be edge-cases in mortgage underwriting that software might miss, but an experienced underwriter would catch?</em> The proof is in the pudding: the world runs on software now. And similar to Professional Liability, many software companies carry <em>Technology Errors &amp; Omissions</em> insurance, in case their software messes something up and their customer goes after them. </p><p>You would think that the market for such insurance is <em>massive</em>. Software handles every button-press in your car, it manages industrial control systems in factories, it monitors the life-or-death status of patients in hospitals. The stakes are high. And we know most software is broken in the margins: every day I visit websites of big, respected companies, and they&#8217;re full of bugs. </p><p>But most software companies haven&#8217;t even heard of Tech E&amp;O insurance. It&#8217;s considered a specialty product, often included as an add-on to cybersecurity insurance. 
Because it&#8217;s so niche, it&#8217;s hard to estimate the market size, but that&#8217;s an indicator of just how <em>small</em> it is: accounting for under $5B in global annual premiums seems like a very safe bet to me.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> For comparison, in the US, Workers&#8217; Compensation runs <a href="https://www.swissre.com/reinsurance/insights/state-of-us-workers-compensation.html">around $55-60B</a> a year in premiums, and Personal Auto insurance <a href="https://www.fitchratings.com/research/insurance/us-insurance-personal-auto-recovering-homeowners-volatility-continues-05-06-2024">over $300B</a>. </p><p>This should give you pause. The handing-over of professional duties to software feels riddled with liability, even today. The thesis for Tech E&amp;O would be very similar to the thesis for the AI insurance product we started out with. (Let&#8217;s call it <strong>AI E&amp;O</strong>.) And yet the market for Tech E&amp;O is small, even in the face of software carrying weighty responsibilities in every nook and cranny of our world. </p><h3>AI E&amp;O and Tech E&amp;O</h3><p>Taking this one step further: you could consider AI E&amp;O as a new form of Tech E&amp;O, or &#8212; depending on the details of the contract &#8212; as included by Tech E&amp;O policies. After all, AI software is still software. It may not be quite as deterministic as software before LLMs, but you&#8217;re still trying to insure the same type of risk: software mistakes.</p><p>Then, in what sense does AI E&amp;O expand the Tech E&amp;O market? Before LLMs, software could make devastatingly expensive mistakes. After LLMs, software can still make devastatingly expensive mistakes. The LLM aspect may increase the potential frequency and severity of those mistakes, but you have a needle to thread: if frequency of severe mistakes increases too much, then insurance becomes moot. 
People are not going to use a software product that breaks all the time, regardless of whether any damages are covered or not. It&#8217;d just be a nuisance.</p><p>This puts insurance entrepreneurs in a structurally tricky position. The Tech E&amp;O market is so small that for a venture-scale thesis, you&#8217;d need to forecast AI E&amp;O increasing the size of the Tech E&amp;O market several-fold, probably 10-20x+. To get there, you&#8217;d have to:</p><ol><li><p>Overcome any structural market issues<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> that may inhibit growth;</p></li><li><p>Bet on severity of claims shooting up, much more so than frequency. AI-enabled software would have to become tremendously more dangerous to deploy, with multi-million-dollar-loss glitches lurking. The risk scenarios you&#8217;d be insuring would be cases like &#8220;I&#8217;m Chevrolet, and my marketing AI promised new trucks to 163 customers&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>  or &#8220;I fired all my accountants, replaced them with ChatGPT, and when I woke up this morning I owed a customer a million dollars.&#8221;</p></li></ol><p>Maybe I&#8217;m being unimaginative, but the maneuvering room to get to widespread AI E&amp;O adoption seems tight. I think the likelier path is that businesses will adopt AI while maintaining some risk-reward equilibrium: steering clear of the use cases with the most severe downside risks, and leaving humans in the loop where appropriate. 
You may well be right to argue that there is still <em>more risk</em> in the system than before, but I don&#8217;t know if there&#8217;s so much risk that it gives rise to a major new class of insurance product and satisfies a venture-scale thesis.</p><h3>Information Asymmetry</h3><p>An important detail of insurance markets is that <strong>insurance carriers must be better at evaluating the risk than the purchasers</strong>. Otherwise you get <a href="https://en.wikipedia.org/wiki/Adverse_selection">adverse selection</a> problems: consumers who know they are more likely to incur claims purchase insurance, the insurance carriers take losses, and the market eventually collapses.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> </p><p>This takes you to a practical concern: how would AI E&amp;O products be underwritten? There would be two parts to it:</p><ol><li><p>The insurer would evaluate the characteristics of the AI company &#8212; industry, size, safety and testing practices, etc., and look at their service agreements with customers to figure out what kind of risk they&#8217;re on the hook for. </p></li><li><p>The insurer would run a large battery of tests against the AI offering of the company, seeing how it holds up under a variety of adversarial scenarios, and what the variability of outputs is.</p></li></ol><p>The first part is a classic point of strength for insurers: given a large portfolio of businesses underwritten, they can figure out how these factors affect pricing. But I expect that for an AI E&amp;O insurance product, it&#8217;s really the second part that determines the risk. Here&#8217;s the problem: <em>why would an insurer be better at testing a company&#8217;s AI outputs than the company itself? 
</em></p><p>Revisiting our earlier example, the folks at Chevrolet would have a much better understanding of their own business, all the ways in which they could deploy AI, and the most dangerous, error-prone areas, than any insurer looking in from the outside. Specifically, there are two related problems:</p><ul><li><p>As an outsider, it&#8217;s extremely hard to get a full understanding of all the ways in which AI will be deployed, and what risks that implies downstream. Hard to price!</p></li><li><p>There is a massive information asymmetry between companies utilizing/selling AI software, and insurers seeking to insure the consequent risks. Trying to insure AI applications looks like a hotbed of adverse selection.</p></li></ul><h3>Concentration of Risk</h3><p>Another classic detail of insurance markets is that insurers need to diversify the risks that they underwrite: for example, if you provide flood insurance, then you wouldn&#8217;t want to write all your policies in a single town by the river: when one house gets flooded by a storm, chances are that all the houses get flooded, and you go out of business. That&#8217;s <em>concentration of risk</em>, and insurers strive to avoid it.</p><p>The trouble is that the ecosystem of AI software products currently has enormous concentration of risk. There&#8217;s a single-digit number of major LLM providers. AI infrastructure, whether for RAG or data labeling, etc. has a similar concentration of activity, with many small providers and a few major ones. Practically speaking, if you&#8217;re insuring mostly GPT wrappers, and the newest GPT model has some kind of safety regression, then your entire portfolio of policies is in trouble.</p><p>For any insurer, it will be tricky to maintain adequate diversification of the underlying risks. 
In practice, this means your portfolio might simply be constrained to a small size, as you can never grow such that you&#8217;d be over-exposed to any particular underlying provider.</p><h3>Underwriting for the Year Ahead</h3><p>The final challenge is that insurance policies are usually written for the full year ahead, and AI software is evolving with great speed. In our own AI deployments at <a href="https://limit.com/ai">Limit</a>, we found surprising differences in behavior and quality from different models. It&#8217;s hard to trust software updates from outside vendors to be strict improvements. </p><p>Further, the speed at which businesses are iterating on their AI software, or deploying it in new contexts, makes the underwriting problem even harder. It&#8217;s tough enough to test the AI software at any one point in time. There&#8217;s no good way to make assumptions about how else it will get used in the next few months, or how well-tested the next software release will be. The remedy for an insurance underwriter will be to prescribe what kinds of updates are in scope for the policy, what level of testing must be done, etc. This helps limit the risk, but it also greatly increases the complexity of the insurance contract, and makes it more cumbersome to purchase.</p><h3>What You Need for AI E&amp;O</h3><p>My skepticism above doesn&#8217;t mean there&#8217;s no case for AI E&amp;O. There certainly is. But it&#8217;s tricky. You&#8217;d have to bring the following conditions together:</p><ol><li><p>There must be rare, hard-to-mitigate, severe risks from AI deployment;</p></li><li><p>The purchasers of such insurance are the actors in the market (such as software providers and consumers) that are <em>stuck</em> with the risk, i.e. 
not able to contractually transfer it to other parties;</p></li><li><p>The insurers would need to be better than the policyholders at figuring out the riskiness of the AI deployment.</p><ul><li><p>Could AI E&amp;O insurers partner with AI testing/safety/QA service providers, similar to how cyber insurers partner with cybersecurity providers? Yes, but those services are already readily accessible to potential insurance customers on the open market!<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> The information asymmetry remains.</p></li><li><p>An insurer wouldn&#8217;t need to know how to underwrite every such company, but could constrain their appetite to certain types of businesses where they feel they can exhaustively understand the AI risks;</p></li></ul></li><li><p>Diversification of underlying risks (technology vendors) would have to be maintained, which practically implies limiting the portfolio size of the insurer;</p></li><li><p>The insurance policies would need to prescribe guardrails around software updates.</p></li></ol><p>It is certainly possible to bring all these conditions together &#8212; it&#8217;s just not easy, and even when you do, it implies a very selective, small portfolio of underwritten risks. I suspect that at least for the next few years, the set of such opportunities will be pretty thin, making it <em>a way</em> but <em>not the best way</em> to attack the AI liability problem. Furthermore, you would need this risk environment to scale up dramatically to give rise to a venture-scale insurance thesis. 
For now, if you&#8217;re really good at evaluating AI model safety, that&#8217;s probably better sold as a standalone service than used to underpin an insurance product.</p><p><em>This piece was inspired by conversations over the past weeks with Rune, Bala, Zack, Alex, and others. Thanks for your thoughts!</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>You might get a figure in that ballpark if you count the premiums of Cyber + Tech E&amp;O policies, but that wouldn&#8217;t be the right thing to do. You&#8217;d need to factor out the costs of the cyber coverage and try to get to the <em>standalone</em> cost of the Tech E&amp;O coverage. On a standalone basis, I&#8217;m almost certain the global volume of Tech E&amp;O is less than $5B in premiums. 
I wouldn&#8217;t be surprised if it&#8217;s under $2B.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Some potential structural market issues below:</p><ol><li><p>Many software businesses have already contractually transferred their liability, thereby obviating any need for an insurance product. It is common for software products to have draconian terms and conditions that users click &#8220;accept&#8221; to without reading: they provide absolutely no warranty, no refunds, the customer agrees to indemnify the business, not the other way around, and so forth.</p></li><li><p>The expected liability may be overstated. Tech E&amp;O is pretty cheap, usually in the very low thousands of dollars for a million dollars of coverage. Think about what that pricing means: it suggests that it&#8217;s not particularly risky to underwrite, and claims are reasonably infrequent/small. There are many things that seem theoretically very prone to error, but in practice work out pretty well.</p><ol><li><p>The liability being &#8220;overstated&#8221; might reflect it being absorbed elsewhere in the stack! Some events that <em>could</em> be covered by a Tech E&amp;O policy may end up being paid for elsewhere in the stack (e.g. the affected party internalizes the loss instead of going after the software vendor), or covered by a different insurance product, e.g. a property policy in the event of property damage due to faulty industrial software. 
I don&#8217;t have any evidence for this; it just strikes me as plausible.</p></li></ol></li></ol></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>This example is inspired by <a href="https://venturebeat.com/ai/a-chevy-for-1-car-dealer-chatbots-show-perils-of-ai-for-customer-service/">a true story</a>!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Or re-prices wildly higher, basically passing the cost of adverse selection on to the regular purchasers.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>In fact, the worst of all adverse selection will occur when insuring companies that already run maximal AI testing/safety/QA and are still nervous. 
On the one hand, they might be excellent, cautious, responsible policyholders &#8212; on the other hand, they might know something the insurer doesn&#8217;t!</p></div></div>]]></content:encoded></item><item><title><![CDATA[#23: How to Write a Good Pitch Deck]]></title><description><![CDATA[Over the years, many friends have asked me to review their pitch decks and provide advice before they go out to raise capital.]]></description><link>https://essays.johnloeber.com/p/23-how-to-write-a-good-pitch-deck</link><guid isPermaLink="false">https://essays.johnloeber.com/p/23-how-to-write-a-good-pitch-deck</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sun, 27 Oct 2024 19:28:52 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/508ee59c-7ee7-4be1-b97a-a73388ebb173_3072x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Over the years, many friends have asked me to review their pitch decks and provide advice before they go out to raise capital. I&#8217;ve now been on both sides of the table for long enough &#8212; writing decks as a founder, reading decks as an occasional angel &#8212; to recognize some patterns that work and some that don&#8217;t. There is one piece of advice I received early on in my journey that has stuck with me &#8212; I repeat it often &#8212; that I want to share with you today. While I discuss this from the perspective of founders raising venture capital, it&#8217;s broadly applicable (by analogy) to many other situations where you&#8217;re making a pitch. I&#8217;ll state it up-front, and then we&#8217;ll go into detail:</p><p><strong>A good pitch deck makes it easy for the reader to write an investment memo.</strong></p><p>Classic advice on writing is to <em>remember your audience</em>, to empathize with your reader, and make your writing accessible to them. 
In that vein, the advice to founders writing pitch decks is to <em>simplify, simplify, simplify</em> &#8212; assume the investor doesn&#8217;t have much time to read the deck, focus on the key points, etc. This is all true, but it misses what&#8217;s next in the process. Keep in mind that the investor wants to learn and understand your company, but they also have work to do: a process to follow, and documents to write. You want to make it easy for them to do their work.</p><p>The investor reads your deck, they may take a call with you, and then they may share your deck <em>and their notes</em> with other people on their team. Early on in the process, the notes might be informal, like some bullet points in an email. Later on in the process, they will draft an investment memo to recommend to the rest of their team &#8212; and to memorialize for the investors in the fund &#8212; why to invest in your company. At that point, you will have supplied many other materials that they may use to write the memo, but the deck is the anchor that people will most frequently refer back to.</p><p><strong>When you pitch, you are taking on a process that should result in an investment memo on the other side.</strong> This means that two things are very important:</p><ol><li><p>You should know what an investment memo looks like.</p></li><li><p>You should make it as easy as possible for the investor to draft their memo.</p></li></ol><h3>What Does an Investment Memo Look Like?</h3><p>Depending on the firm, sector, and stage, investment memos have different formats and styles. 
Thankfully, there are enough public examples<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> that you can get a feel for them. Some examples below:</p><ul><li><p><a href="https://x.com/NTmoney/status/1848692953554031107">1Confirmation&#8217;s memo</a> for Bridge.xyz (2022)</p></li><li><p><a href="https://github.com/Datamine/Various/blob/master/YouTube%20Sequoia%20Memo.pdf">Sequoia&#8217;s memo</a> for YouTube (2005)</p></li><li><p><a href="https://www.canaan.com/latest/what-s-the-road-from-12m-to-3b-look-like-here-is-our-investment-memo-from-1999-for-on24">Canaan&#8217;s memo</a> for On24 (1999)</p></li><li><p><a href="https://raw.githubusercontent.com/Datamine/Various/refs/heads/master/snapchat-seed-memo.webp">Lightspeed&#8217;s memo</a> for Snapchat (2012)</p></li><li><p><a href="https://medium.com/redpoint-ventures/the-future-of-non-payroll-spend-our-continued-investment-in-ramp-why-we-significantly-upped-e1db3a15c15d">Redpoint&#8217;s memo</a> for Ramp (2021)</p></li><li><p><a href="https://greylock.com/portfolio-news/congratulations-roblox/">Greylock&#8217;s memo</a> for Roblox (2018)</p></li><li><p>Bessemer has published <a href="https://www.bvp.com/memos">17 memos</a> for investments between 2005 and 2015.</p></li></ul><p>Additionally, some firms write about their processes and how they draft their memos. NextView has a <a href="https://nextview.vc/blog/the-investment-memo/">fine blog post</a> on this, for example. As you dig into these examples, you&#8217;ll notice a few things:</p><ul><li><p>Venture firms tend to examine businesses from a different perspective than how a founder might present them &#8212; paying a lot of attention to areas that a founder might not, and vice-versa.</p></li><li><p>Writing a memo is not easy!</p></li></ul><p>It&#8217;s a good exercise to try yourself. Write a memo for a company that you&#8217;re familiar with. Write a memo for your own company. 
(One of the underrated benefits of investing in startups is that it gives you more of an investor&#8217;s perspective when writing the pitch for your own company.) Having some empathy for and familiarity with the work that happens on the other side will make your presentation of your company much better.</p><h3>Make it Easy for Them to Write their Memo</h3><p>I usually recommend the following process. First, put together your pitch deck organically &#8212; there are many good templates and structures you can adopt &#8212; and present your business however you feel is right. Then examine your deck from the investment memo perspective:</p><ul><li><p>For every slide: are you presenting information in the way that you&#8217;d expect it to be reflected in the memo? What would the memo version of the same content say? If there&#8217;s a big difference, consider presenting the information another way.</p></li><li><p>For every piece of information: does it get in the way of the reader as they&#8217;re doing their work? Does it distract? If it&#8217;s certainly not going to make it into the memo &#8212; maybe it&#8217;s a detail, but still important &#8212; consider moving it to an appendix or a supplementary document.</p></li><li><p>Are you missing any information that the investor would normally mention in a memo to their colleagues? Include it. </p></li></ul><p>Applying this framing should help take you out of the bubble of your own perspective, and clarify your work. In practical terms, this normally simplifies your pitch, moves content from abstract to concrete, and helps reduce downstream games of telephone as investors try to interpret your work.</p><h3>Tactics and Pitfalls</h3><p><strong>Expectations Cascade<br></strong>Sometimes it&#8217;s tempting for a founder to skip a slide that an investor might expect, like on TAM, revenue projections, or competition. The founder may have a rational reason for doing so, and write it off as an unnecessary hoop to jump through. 
But the investor knows that their colleagues will want to know those things as they try to understand the business.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> If you don&#8217;t provide the information, the investor will have to try to put it together on their own. Usually, you&#8217;re better off presenting that information up-front and guiding their research.</p><p><strong>Answer the Simple Questions<br></strong>Try to anticipate the questions that you&#8217;re going to get in a slightly adversarial reading of your deck. For example, if you only have a few months of revenue progress and some LOIs, reading that can be confusing for outsiders. I have often had to ask &#8220;okay, who are your top five customers, how much are they paying you this year, and what do you think they will pay you next year?&#8221; as I try to wrap my head around what&#8217;s going on inside a company. A simple table in the appendix can do wonders. </p><p>From another angle: imagine an investor casually bringing up your pitch to their colleagues internally for a gut-check, and answering their basic questions like &#8220;what does the company do?&#8221; and &#8220;why is this interesting?&#8221; Your deck should equip the investor with all the key talking points and make it super easy for them to advocate on your behalf, before having spent many hours on a deep-dive.</p><p><strong>Use a Supplementary Memo<br></strong>One of the main challenges in writing a deck is figuring out what&#8217;s superfluous and can be struck. Everything feels important! To sidestep this, some founders like to keep their pitch decks brief, and write a comprehensive document as a supplementary memo. Personally, I like that format. It lets you keep the deck as an overview, and gives you a place to put all the details. 
That document should give the investor everything they need to understand your business, and to write their investment memo. (Of course, they will consider many other materials as they do their diligence, but the materials that you provide will always function as the entry point.)</p><p><strong>Provide Good Resources</strong><br>As investors dig in on your business, they will want to consult lots of external resources. They will start from your deck (and/or supplementary memo): they&#8217;ll look up and double-check the industry statistics you cite, research the competitors you mention, etc. You can make it easier for them by using footnotes in your deck to link to resources they can read. Ideally, a resource that you link to doesn&#8217;t just back you up on one particular fact, but generally gives the investor high-quality information that feeds well into the case for your business.</p><p><strong>Testimonials<br></strong>Founders sometimes play coy with the identities of their customers. They might write things like &#8220;<em>Amazing product, I&#8217;d pay lots of money for this&#8221; - Fortune 500 COO </em>in their deck and then voice-over the identity of the person in a live pitch, or provide logos of big customers, but not name who specifically at the firm bought the product. There&#8217;s nothing quite as useful for an investor as a discussion with your customers &#8212; make it easy! If you&#8217;re confident that your customers will provide good testimonials, and they&#8217;re willing to act as references, then always provide their names.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> </p><p><strong>Competitors and Comparables<br></strong>Founders often focus on competition in their slide decks, trying to point out why what they&#8217;re doing is distinct and special. 
But competition matters in a deck not just as a threat to the business, but also as a reference point: for example, your legacy competitors in the public markets are useful to look at, because they show how such businesses are valued at scale, how their market is evolving, whether they might be good acquirers of your company one day, etc. Understanding how to value your business can be nontrivial, and clear enumeration of the comps will, just like everything else in this essay, save the investor time as they dig in on your business.</p><h3>Conclusion</h3><p>If you want to work productively with other people, then a good principle is to ask <em>how can I unblock them? </em>and <em>how can I make it easy for them to do their work?</em> Raising capital is no exception. Appreciate that people on the other side have a process they must run, think about the boxes they have to check, and make that easy for them. By contrast, going with unorthodox presentation/data formats, insisting on <em>doing the process your way, </em>or pitching your company such that it raises many questions that are hard to answer, are good ways to get shuffled to the bottom of the priority list. The best thing that you can do for busy people is to help them be efficient with their time.</p><p>If you&#8217;re thinking about putting together a deck for something: good luck! I hope these notes are helpful. If you ever want a second pair of eyes, I&#8217;m happy to help. You can reach me at contact@johnloeber.com.</p><p><em>Thanks to Mike and Evan for their feedback and comments on this piece.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Internal investment memos are very different from the public-facing &#8220;why we invested&#8221; blog posts that many firms tend to write. Some firms will voluntarily publish their memos, but note that these tend to be redacted/modified versions. For example, you may notice that in many of the memos I&#8217;ve linked to, details around round pricing and structure tend to be light.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>And in turn, they may be expected by their LPs to present this information, etc. Expectations for information cascade all the way throughout the stack, and even if you&#8217;re able to persuade someone at one level of the stack that the information doesn&#8217;t matter, the next person one level up in the stack will be puzzled why it&#8217;s missing. 
That&#8217;s not good!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>You could even put a slide of customer reference contact details to call/email in an appendix.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#22: A Bubble is Rarely A Bubble]]></title><description><![CDATA[It&#8217;s a frothy time in AI.]]></description><link>https://essays.johnloeber.com/p/22-a-bubble-is-rarely-a-bubble</link><guid isPermaLink="false">https://essays.johnloeber.com/p/22-a-bubble-is-rarely-a-bubble</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sat, 05 Oct 2024 23:37:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4a6221db-71b7-405c-98d5-e60e072c2034_1536x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s a frothy time in AI. OpenAI, Anthropic, and xAI are raising some of the largest venture rounds ever. The appetite for AI businesses far outstrips anything else in the technology sector. 
</p><p>But these companies are <a href="https://www.tanayj.com/p/openai-and-anthropic-revenue-breakdown">devouring capital with no profitability in sight</a>, and are caught in a <a href="https://www.sarahtavel.com/p/the-big-stack-game-of-llm-poker">game of margin-eroding lockstep competition</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Most importantly, the limited revenue being created <em>downstream</em> of these businesses calls into question whether they can grow into their valuations: David Cahn called this <a href="https://www.sequoiacap.com/article/follow-the-gpus-perspective/">AI&#8217;s $200B</a>, and later, <a href="https://www.sequoiacap.com/article/ais-600b-question/">$600B Question</a>.</p><p>Taking this logic a step further, some people are calling the current level of investment in AI a &#8220;bubble&#8221;. I am now old enough to have heard many things referred to as &#8220;bubbles&#8221; in my lifetime, and have generally found this framing to be misleading. As with many cynical views, talk of bubbles is a good way to sound clever but miss the value. In this essay, I will:</p><ol><li><p>Debunk some major historical bubbles;</p></li><li><p>Make the case that the only true, permanently-collapsing bubbles are <em>frauds</em>;</p></li><li><p>Suggest how things will play out in AI: we are still very early. 
</p></li><li><p>By analyzing Dot-Com capital investment trends as an analogy, make the case that we should expect <em>much</em> more capital to flow into AI.</p></li><li><p>Discuss how and why we should expect investment activity in AI to shift away from traditional venture capital and into the public markets.</p></li></ol><h3>2000: The Dot-Com Bubble</h3><p>From about 1995 to 2000, investors were hugely excited about telecom, computing, and internet businesses, with early-stage startups routinely going public and getting enormous valuation multiples. It was an incredible time to be a technologist &#8212; <a href="https://en.wikipedia.org/wiki/Dot-com_bubble">until it all collapsed</a>, and countless paper millionaires were wiped out entirely.</p><p>But how irrational was all this exuberance? The largest companies of today &#8212; Microsoft, Google, Amazon, Apple &#8212; were all significant players in the <a href="https://en.wikipedia.org/wiki/Dot-com_bubble">Dot-Com era</a>. Companies like Pets.com and Webvan did go out of business, but were reincarnated a decade later as firms like Rover and Instacart, which ended up being successful in their own right. 
Marc Andreessen and Bret Taylor make good cases<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> that there was no categorical <em>tech bubble</em>, but rather a market distortion due to the <a href="https://en.wikipedia.org/wiki/MCI_Inc.">Worldcom</a> and <a href="https://en.wikipedia.org/wiki/Enron_scandal">Enron Frauds</a>, which caused collateral damage as they collapsed.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0N8N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0N8N!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png 424w, https://substackcdn.com/image/fetch/$s_!0N8N!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png 848w, https://substackcdn.com/image/fetch/$s_!0N8N!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png 1272w, https://substackcdn.com/image/fetch/$s_!0N8N!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!0N8N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png" width="1196" height="654" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:654,&quot;width&quot;:1196,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:103763,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0N8N!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png 424w, https://substackcdn.com/image/fetch/$s_!0N8N!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png 848w, https://substackcdn.com/image/fetch/$s_!0N8N!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png 1272w, https://substackcdn.com/image/fetch/$s_!0N8N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe5623b7c-9091-4c18-9d50-8799bf27168d_1196x654.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The dot-com bubble was the little blip on the left of this chart. Even if you had <em>bought the top</em> on March 30, 2000 and sat through a catastrophic drawdown &#8212; over the next two years you&#8217;d face 80% losses in most of your positions &#8212; you would&#8217;ve done extremely well in the end. 
Between the &#8220;top&#8221; in March 2000 and today, your Amazon position would return 203x, your Apple position 66x, etc.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Even if your portfolio was mostly losers that went to zero, you would&#8217;ve covered your losses and then profited many, many times over.</p><p><em>All you had to do was nothing.</em> But to dispassionately wait it out is no easy feat. You had to be a technology investor with a well-diversified portfolio and true long-term conviction. When most people take an 80% loss and all the pundits are sneering at them like &#8220;obviously a 200 P/E ratio was unsustainable&#8221;, their natural reaction is to sell and wash their hands of it, not to hold for another 20 years. But holding would&#8217;ve been right, in a way that looks very predictable in retrospect: of course technology is the future, and some of the giants of the future are born today.</p><h3>2008: Housing Bubble</h3><p>In the lead-up to the <a href="https://en.wikipedia.org/wiki/2007%E2%80%932008_financial_crisis">2008 financial crisis</a>, US housing prices exploded for a variety of reasons, one of which was easy access to (subprime) credit. Contemporary commentators spoke of a &#8220;housing bubble&#8221;, as they still do today. </p><p>In March 2020, Jesse Colombo pulled together some statistics to <a href="https://www.forbes.com/sites/jessecolombo/2020/03/31/why-us-housing-bubble-20-is-about-to-burst/">argue</a> both that 2000-2007 constituted a fiscally-driven housing bubble, and that the US was due for a 2020 correction. 
His chart below:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uur2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uur2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png 424w, https://substackcdn.com/image/fetch/$s_!uur2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png 848w, https://substackcdn.com/image/fetch/$s_!uur2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png 1272w, https://substackcdn.com/image/fetch/$s_!uur2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uur2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png" width="958" height="602" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:602,&quot;width&quot;:958,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Housing vs. 
CPI&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Housing vs. CPI" title="Housing vs. CPI" srcset="https://substackcdn.com/image/fetch/$s_!uur2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png 424w, https://substackcdn.com/image/fetch/$s_!uur2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png 848w, https://substackcdn.com/image/fetch/$s_!uur2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png 1272w, https://substackcdn.com/image/fetch/$s_!uur2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb53cfeb2-0da1-4d2e-b2fb-1932a4ebe2af_958x602.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 
11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The timing of this argument could not possibly have been worse! Within a few weeks,  the Fed would kick off the mother of all growth cycles, and housing prices would boom even faster than before. When you zoom out, talk of a &#8220;bubble&#8221; becomes unpersuasive: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!wj6X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!wj6X!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wj6X!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!wj6X!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wj6X!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!wj6X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg" width="1456" height="1028" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1028,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Average Home Prices (2024)&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Average Home Prices (2024)" title="Average Home Prices (2024)" srcset="https://substackcdn.com/image/fetch/$s_!wj6X!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg 424w, https://substackcdn.com/image/fetch/$s_!wj6X!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!wj6X!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!wj6X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3baff3d0-5fd5-4119-92b1-1ec4b309625c_1558x1100.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>To me, this looks pretty steady. The &#8216;08 recession barely shows up &#8212; a short-lived drawdown in an exponential trend. 
All you had to do was wait through it. In retrospect, this trend seems very predictable: without profound regulatory changes that you&#8217;d be able to see coming years in advance, the price of housing in the US will continue to grow just as it has for decades.</p><h3>2013, 2017, 2021: Crypto</h3><p>In late 2013 and early 2014, a flurry of trading activity on <a href="https://en.wikipedia.org/wiki/Mt._Gox">MtGox</a> &#8212; the main cryptocurrency exchange of the time &#8212; briefly took the price of Bitcoin past $1,100. At the time, it seemed extraordinary. $1,000 per coin? <em>A ten billion dollar market cap?</em> Unbelievable amounts of money. Months later, MtGox collapsed, and so did the price of Bitcoin with it. Bitcoiners soon found out that the price had been <a href="https://willyreport.wordpress.com/2014/05/25/the-willy-report-proof-of-massive-fraudulent-trading-activity-at-mt-gox-and-how-it-has-affected-the-price-of-bitcoin/">fraudulently pumped on MtGox by a trading bot that researchers named Willy</a>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> The price of Bitcoin fell by over 80% at its lowest, and took three-and-a-half years to climb back to $1,100. Countless early Bitcoiners lost faith during this time and thought it was over. Today, nobody remembers any of this.</p><p>On January 13, 2018, the New York Times published a memorable article: <a href="https://www.nytimes.com/2018/01/13/style/bitcoin-millionaires.html">Everyone&#8217;s Getting Hilariously Rich, and You&#8217;re Not</a>. It made fun of the wild exuberance in cryptocurrencies, with Bitcoin trading as high as $19,300 and Ethereum at $1,100. It was the first time that crypto had really hit the mainstream.
People at the time thought this was a crazy bubble; I remember taxi drivers telling me what ICOs to buy.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> The market collapsed shortly thereafter, and Bitcoin would once more come down to around $3,000 in the intervening years.</p><p>Then, in 2021, following the massive COVID-era financial stimulus, crypto had another incredible run, lifting Bitcoin well past $60,000, and countless smaller cryptocurrencies along with it. This cycle too wound up with truly spectacular financial collapses, with billions of dollars incinerated at shops like <a href="https://en.wikipedia.org/wiki/Terra_(blockchain)#Collapse">Terra/Luna</a>, <a href="https://en.wikipedia.org/wiki/Celsius_Network">Celsius</a>, <a href="https://en.wikipedia.org/wiki/FTX">FTX</a>, <a href="https://en.wikipedia.org/wiki/Three_Arrows_Capital">Three Arrows Capital</a>, etc. Bitcoin would collapse down to $15,000 in the aftermath.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4FP3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4FP3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png 424w, https://substackcdn.com/image/fetch/$s_!4FP3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png 848w, 
https://substackcdn.com/image/fetch/$s_!4FP3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png 1272w, https://substackcdn.com/image/fetch/$s_!4FP3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4FP3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png" width="924" height="386" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/59043f12-5326-481e-bc53-150034395f04_924x386.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:386,&quot;width&quot;:924,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:48444,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4FP3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png 424w, https://substackcdn.com/image/fetch/$s_!4FP3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png 848w, 
https://substackcdn.com/image/fetch/$s_!4FP3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png 1272w, https://substackcdn.com/image/fetch/$s_!4FP3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59043f12-5326-481e-bc53-150034395f04_924x386.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It is now October 2024, and Bitcoin is once more trading at $60,000, with not a peep in the popular press. 
The 2013 and 2017 episodes no longer look like bubbles, but mere blips on a chart. Even 2021 no longer looks like a bubble, insofar as the $60,000 price ballpark appears to be well-supported today. As with the Dot-Com bubble, many positions will pan out as losses, but long-term investors with carefully diversified<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> portfolios have done well.</p><h3>1630: Tulips</h3><p>Both crypto and the Dot-Com era have been compared to the Dutch <a href="https://en.wikipedia.org/wiki/Tulip_mania">Tulip Mania</a> of the 1630s, since that&#8217;s the one crazy bubble story that everyone knows. There are only two problems with it:</p><ol><li><p>That story is 400 years old;</p></li><li><p><a href="https://www.ft.com/content/8ad786dc-fe95-11db-bdc7-000b5df10621">It&#8217;s mostly made up.</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p></li></ol><p>I&#8217;ve always been surprised that the Tulip story has had so much staying power: you&#8217;d think that a hundreds-of-years-old, niche-market story would not be highly valued as an analogy for reasoning about massive modern markets, even before historians figured out it was mostly fiction. But it&#8217;s been a very sticky &#8212; and damaging &#8212; meme. Mass enthusiasm for any kind of new technological paradigm is often met with skepticism, and invoking <em>Tulip Mania</em> is always an easy dismissive reach.</p><h3>The Role of Fraud</h3><p>The most disastrous of all historical market bubbles was the <a href="https://en.wikipedia.org/wiki/South_Sea_Company">South Sea bubble</a> of the 1710s. But it was driven not just by speculation, but by massive fraud: prominent politicians were paid to pump the stock, and financial games like large-scale debt-for-equity swaps created an illusion of value.
</p><p>Indeed, in virtually all other famous bubbles, fraud played a meaningful part: Enron and Worldcom set the pace for the Dot-Com bubble. FTX and others misappropriated funds to pump the prices of cryptoassets in 2021, creating fraudulent demand. The financial crisis of 2008 followed widespread mortgage and ratings fraud.</p><p>This suggests an interesting framing. Frauds can create bubbles of their own, and draw a lot of speculative excess,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> but eventually collapse permanently. And it is not uncommon for big waves of technological/financial opportunity to get mixed up with some level of fraud; hot markets tend to attract grifters. However, after these frauds collapse &#8212; temporarily dragging down the market with them &#8212; the fundamental value remains. Investors in technology post-Enron, crypto post-FTX, and US real estate post-2008 have all done very well.</p><h3>Takeaways</h3><p>Once you discount the singular story of Tulip Mania, and try to account for the impact of fraud, you are left with market histories that don&#8217;t look so crazy. It&#8217;s hard to find examples of irrational, speculative excess at scale. The popular narrative of a market bubble as a <a href="https://en.wikipedia.org/wiki/Extraordinary_Popular_Delusions_and_the_Madness_of_Crowds">madness of the crowd</a> &#8212; people deluding themselves into losing their last shirt betting on magic beans &#8212; is, like a lot of other popular psychology, much more an attractive <em>story</em> than a grounded fact. I think this framing hurts the people who believe it, as it excludes them from participating in episodes of great long-term wealth creation.
A &#8220;bubble&#8221; is rarely a bubble.</p><p>Of course, the market can be short-term irrational, and there are fads that blow up and then collapse permanently, like the <a href="https://www.vanityfair.com/hollywood/2023/07/the-beanie-bubble-burst-inside-billionaire-ty-warners-furry-empire">Beanie Babies craze</a>. Those things happen all the time.</p><p>But the difference is in scale. Beanie Babies were a retail craze, not a multibillion-dollar market. When there are billions of dollars of (sophisticated) capital moving into a space, there has usually been something to it. If you invest in groundbreaking technological innovation, avoid frauds, and are patient enough to hold for the long term, you tend to do well. Disruptive value creation tends to come with some volatility, and the paper-handed get shaken out over a few boom-bust cycles. The returns are captured by those who hold on and double down.</p><p>Looking back at the Dot-Com crash, or even the Railway or Automobile investment manias<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> of the late 1800s and early 1900s: temporary market crashes tend to flush out many of the competing firms, but some survive and go on to capture the market&#8217;s long-term value. The tricky question for investors, and the reason to be well-diversified, is that it is very hard to predict ahead of any crash which firms will be the long-term winners.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p><h3>AI</h3><p>By our framing of prior bubbles, the most important question is whether there is fraud. I don&#8217;t think there is. There&#8217;s no FTX injecting fake liquidity. There&#8217;s no Enron luring people with blockbuster returns. In fact, reported revenues are still quite low.
</p><p>In that sense, I don&#8217;t think there&#8217;s a &#8220;bubble&#8221; in AI. In fact, I think we are still <em>extremely early</em>. Right now, there are still many questions about deployment, monetization, margins, commoditization, and where long-term value accrues. As those questions get answered and revenues from other parts of the economy start substantially transitioning into AI, Main Street investors will begin to internalize just how much change is coming down the pike. Then you&#8217;ll see a much larger scale of capital inflow.</p><h3>AI in the Public Markets</h3><p>When I mention a much larger scale of capital inflow, what do I mean? Isn&#8217;t $6.6B already the largest venture round of all time? Wasn&#8217;t venture investment even at the peak of the Dot-Com era just $35B a year?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></p><p>People forget that private markets are much smaller than public markets, and that equity markets are much smaller than debt markets. So far, all of this investment activity has been in private markets, and in equity rather than debt.</p><p>During the Dot-Com era, telecom companies were spending <a href="https://www.richmondfed.org/-/media/richmondfedorg/publications/research/economic_quarterly/2003/fall/pdf/wolman.pdf">$135B a year</a> &#8212; $250B in 2024 dollars &#8212; on infrastructure. In aggregate, they spent around <a href="https://www.fabricatedknowledge.com/p/lessons-from-history-the-rise-and">$700B that decade</a>. These were incredible levels of raw capital expenditure, premised on the belief that there were network effects to owning internet infrastructure.
To support this extraordinary level of spend, they raised over $2T in the public markets,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> at least $600B ($1.1T in 2024) of which was in bonds (debt).</p><p>That debt came to be the big problem. Virtually all of these telecom firms were massively overlevered.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> You know how that story ended.</p><p>But the Dot-Com telecom story makes it clear that there&#8217;s a much larger scale of capital available. Just as with the advent of the internet, AI offers a globally transformational opportunity. There are some lessons to be heeded with respect to debt and leverage, but modern technology giants are much smarter about this.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a> When Sam Altman said he wanted to raise $7 trillion, the telecom era puts that number into historical perspective: do I believe that the opportunity in AI is seven times larger than the opportunity in the internet? I&#8217;ll leave that question to you.</p><p>In the path of this opportunity, the capital demands are unending. Larry Ellison thinks that <a href="https://www.barrons.com/articles/oracle-stock-ai-larry-ellison-43c1ecd9">competing in LLMs will soon cost over $100B</a>, and he&#8217;s right. This means that some of these private firms may soon need to transition into the public markets to fundraise (and maybe even into the debt markets, for those confident in their revenues and margins), just as a way of accessing sufficient capital at scale. In tech, we haven&#8217;t seen that kind of dynamic in a long time.</p><p>I think it would be interesting and good if that were to happen.
Investment capital into AI would scale up significantly, but there&#8217;s also a pro-social effect. Bear in mind that the bulk of <em>innovation</em> is currently in hard-to-access privately held companies. There are public companies like Google, Microsoft, and Nvidia for folks to bet on, but the vast majority of investors will not have any exposure to firms like OpenAI or Anthropic for the foreseeable future. These technologies will meaningfully reshape the economy, and the perception that all the economic gains are being captured by a few private companies would draw significant political heat. This transitional phase will go much more smoothly if Main Street investors can have, for example, at least a little bit of exposure through their 401(k)s. Regulators would be smart to help make the public markets more accessible, to &#8220;democratize&#8221; this big shift.</p><h3>Conclusion</h3><p>It&#8217;s going to take some time for AI to percolate and for revenues to transition from other parts of the economy. Between now and then, I&#8217;m sure some investors will lose patience, many firms will be outcompeted, and unexpected innovations will hit many times over. The market will have some volatility.</p><p>But overall, it is clear that, just as in the Dot-Com cycle, we are witnessing the emergence of technologies that will become the dominant forces in our economies in the coming decades. Long-term, the biggest players in AI will be the biggest companies on the planet. If I&#8217;m making sector-level bets today, I&#8217;m not going to be worried by any 2025 drawdown, for example.
As usual, it&#8217;s hard to predict <em>which firms</em> specifically will come out on top a decade from now after some boom-bust cycles, and where in the stack they will play, but I think it&#8217;s safe to expect that well-diversified investors<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a> in AI will do very well long-term.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>I&#8217;d call this a <a href="https://fs.blog/the-red-queen-effect/">Red Queen Race</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>See:</p><ul><li><p>Marc: <a href="https://x.com/pmarca/status/1840165701849616579">Tweet</a></p></li><li><p>Marc: <a href="https://x.com/pmarca/status/1840310825644691823">Tweet</a></p></li><li><p>Tren Griffin&#8217;s <a
href="https://25iq.com/2017/11/11/the-1990s-telecom-bubble-what-can-we-learn/amp/">blog post</a> on the &#8220;Telecom Bubble&#8221;</p></li><li><p>Bret: <a href="https://open.spotify.com/episode/6YEHFTNK2HNa62HBGDqNQZ">Podcast</a></p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I know that the Dot-Com crash happened <em>before</em> the frauds collapsed, but my point is that if not for those frauds pumping up the market in the first place, the Dot-Com era would&#8217;ve probably played out a little differently.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>And you would&#8217;ve done much better if you hadn&#8217;t bought the literal top. Buying in 1996 would&#8217;ve scaled up your returns by another order of magnitude.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>The story is incredible and I recommend reading it, if you&#8217;re into that kind of thing. 
They published more on Willy later, under the <a href="https://blog.wizsec.jp/2015/02/mtgox-investigation-release.html">WizSec Security Research</a> blog.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>I remember many people at the time riffing on Joe Kennedy Sr., something along the lines of: when your shoe-shine boy is telling you which stocks to buy, it&#8217;s time to exit the market.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Because of the liquidity network effects inherent in cryptocurrencies, the emphasis here is much more on &#8220;carefully&#8221; than on &#8220;diversified&#8221;. So far, this has been the rare sector where you don&#8217;t want a market index or portfolio of many different assets.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>If you don&#8217;t have an FT account: the article reviews a 2008 book called <a href="https://www.amazon.com/Tulipmania-Money-Honor-Knowledge-Golden/dp/0226301265/ref=sr_1_1?dib=eyJ2IjoiMSJ9.egsR4X33ZMK6EHehTU8vHg.P8ZWhiCs0HiXkSI4OL3F0y57YWBbjKvnuKiereM1I0o&amp;dib_tag=se&amp;keywords=Tulipmania%3A+Money%2C+Honor%2C+and+Knowledge+in+the+Dutch+Golden+Age&amp;qid=1728003783&amp;sr=8-1">Tulipmania</a>, the  thrust of which is to historically debunk the Tulip Mania story. 
There&#8217;s also another article in <a href="https://www.smithsonianmag.com/history/there-never-was-real-tulip-fever-180964915/">the Smithsonian</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>The word &#8220;excess&#8221; should be carefully considered, since you could argue that market participants were reacting rationally to information they believed to be truthful. The problem is that the information was false!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>These present significant counterexamples to the notion that long-term investors entering hype cycles tend to do well. Though these innovations created immense value, speculation was so great, and the rate of corporate death was so great, that most early investors lost money. There were over 1,500 automobile manufacturers in the early 20th century, and the vast majority of them went out of business. At that scale, it&#8217;s very hard to assemble a diversified portfolio such that you can hit the 1-in-20 or 1-in-100 jackpot the way that Dot-Com investors did (holding many positions that went to zero, and Amazon that returned their portfolio 100+ times over).<br><br>My perspective on this is that these investment manias really suffered from insufficiently regulated markets. Fraud was everywhere; this was long before vanilla financial regulations such as fiduciary responsibilities to shareholders and insider trading laws. 
I suspect that had the law been more mature, there would&#8217;ve been much less fraud, and investors generally would&#8217;ve done better for that reason.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>This is pretty much the thesis of Alisdair Nairn&#8217;s <a href="https://www.amazon.com/Engines-That-Markets-Alisdair-Nairn/dp/0857195999/ref=sr_1_1?crid=D6SY71IFU3KP&amp;dib=eyJ2IjoiMSJ9.iHJx8UhVLycdsvLYKTW53PQmOLgToEKr33Z8ZmuIRI_GjHj071QN20LucGBJIEps.d1ZIdR_zFENmN6SpiKK1SMGoMIAkqWCfM2SNjm1YaoY&amp;dib_tag=se&amp;keywords=engines+that+move+markets+by+a.+nairn&amp;qid=1728064824&amp;sprefix=engines+that+move+markets%2Caps%2C252&amp;sr=8-1">Engines that Move Markets</a>, which has been on my mind as I&#8217;ve written this essay. Nairn&#8217;s framing is that when a new, disruptive technology enters a market, it&#8217;s easy to spot the losers, but hard to figure out who will be the winner.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>$64B in 2024 dollars. 
All figures in this section will be in 1994 dollars, unless otherwise noted.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>The accounting here gets pretty fussy &#8212; over $1T of the capital raised went toward acquiring competing telecom firms, not toward building out infrastructure.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>The Worldcom fraud played a big role, because Worldcom&#8217;s market activity was setting the pace for all competitors. Without Worldcom&#8217;s (and Enron&#8217;s) presence or malfeasance in the market, the market would likely not have had as much leverage or subsequent volatility.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Other than Amazon, no big tech company carries net debt.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>Another reason that I&#8217;d like to see more fundraising activity for AI companies transition to public markets is that so much innovation is locked in private markets at the moment, and therefore it&#8217;s hard to build a well-diversified portfolio.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#21: Everything we know about LLMs doing Arithmetic]]></title><description><![CDATA[If you&#8217;re interested in large language models, then you should care about their ability to do 
arithmetic.]]></description><link>https://essays.johnloeber.com/p/21-everything-we-know-about-llms</link><guid isPermaLink="false">https://essays.johnloeber.com/p/21-everything-we-know-about-llms</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sun, 01 Sep 2024 13:13:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/422e6156-4293-4534-a347-3c6a8cce547d_1536x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you&#8217;re interested in large language models, then you should care about their ability to do arithmetic. On the road to AGI, arithmetic problems provide a neat microcosm of more general multi-step reasoning problems. Here&#8217;s why:</p><ol><li><p>Arithmetic problems are simple examples of reasoning tasks in general. </p></li><li><p>Arithmetic problems can be solved by simple algorithms: rules that must be applied consistently. You can arbitrarily increase the difficulty of a problem, i.e. the number of times that a rule must be applied.</p></li><li><p>This means that arithmetic provides good windows into both <em>single-step</em> and <em>multi-step</em> (i.e. chain-of-thought) reasoning tasks. We can evaluate both individual calls to LLMs, as well as compositions of such calls.</p></li><li><p>State-of-the-art LLMs still struggle with simple arithmetic problems, even as they have scaled up dramatically in size and on standard evaluation benchmarks.</p></li><li><p>Solutions are easy to evaluate.</p></li></ol><p>In this article, I try to summarize everything that we know about LLMs doing arithmetic. 
I&#8217;ll point you to all the interesting papers that I&#8217;m aware of, and draw some observations of my own.</p><h3>1. The Problem</h3><p>The first thing you&#8217;ll notice is that LLMs do great on short arithmetic problems, but struggle with long ones. For example, if you ask an LLM to add three two-digit numbers, it&#8217;ll do fine, but it will fail for large numbers or long sequences (e.g. adding many small numbers). I shared some experiments on this topic <a href="https://loeber.substack.com/p/16-notes-on-arithmetic-in-gpt-4">in a prior blog post</a>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xG6A!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xG6A!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 424w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!xG6A!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 1272w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xG6A!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png" width="1262" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1262,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xG6A!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 424w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!xG6A!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 1272w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>These results tend to surprise people: LLMs seem smart and quite capable of abstract reasoning over complex texts. 
Why do they struggle with basic arithmetic? For example, in my experiments, I found that failures for large-number additions almost always occurred in the same way, with one particular digit being misplaced, usually in the thousands-place of the number. Why?</p><h3>2. Issues with Tokenization and Position</h3><p>You have to keep in mind that LLMs do not see text the same way you do. LLMs use two key abstractions for working with text:</p><ul><li><p><strong>Tokenization:</strong> input words (or numbers) are converted into <em>tokens</em> that the LLM processes. For example, the number &#8220;483,647&#8221; could be converted:</p><ul><li><p>Into seven tokens <code>[&#8220;4&#8221;, &#8220;8&#8221;, &#8220;3&#8221;, &#8220;,&#8221;, &#8220;6&#8221;, &#8220;4&#8221;, &#8220;7&#8221;]</code></p></li><li><p>Or into two tokens <code>[&#8220;483,&#8221;, &#8220;647&#8221;]</code></p></li><li><p>Or into three tokens <code>[&#8220;483&#8221;, &#8220;,64&#8221;, &#8220;7&#8221;]</code></p></li><li><p>Or in many other ways.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> Of course, your tokenization scheme impacts how the LLM interprets the inputs.</p></li></ul></li><li><p><strong>Positional Encodings: </strong>every token is related to the ones that came before and after. This allows the LLM to maintain an understanding of how the textual statements are ordered, and the meaning inherent in their relative positions. These encodings are usually one-dimensional, representing the text like one long line.</p></li></ul><p>Your intuition might suggest that tokenization and positional encoding that works well for <em>words</em> may not work well for <em>math</em>. 
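</p><p>To make the tokenization point concrete, here is a toy sketch (my own illustration in Python, not the behavior of any real tokenizer): chunking the same digit string left-to-right versus right-to-left changes which digits end up sharing a token, and only right-to-left chunking keeps tokens aligned with place value, i.e. with the thousands groupings.</p>

```python
def tokenize(num: str, right_to_left: bool) -> list[str]:
    """Split a digit string into 3-digit chunks. The chunking
    direction decides which digits share a token."""
    if right_to_left:
        # Chunk from the least-significant digit, so chunks line up
        # with place value: "4836471" -> ["4", "836", "471"]
        rev = num[::-1]
        chunks = [rev[i:i + 3][::-1] for i in range(0, len(rev), 3)]
        return chunks[::-1]
    # Naive left-to-right chunking: the final chunk is a ragged
    # leftover, and chunk boundaries shift with the number's length.
    return [num[i:i + 3] for i in range(0, len(num), 3)]

print(tokenize("4836471", right_to_left=False))  # ['483', '647', '1']
print(tokenize("4836471", right_to_left=True))   # ['4', '836', '471']
```

<p>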
With words, there&#8217;s some leniency on precision &#8212; you can understand what someone means even when their writing is riddled with typos &#8212; but for arithmetic, you need to get every single character right.</p><p>Doing that can be harder than it sounds. Consider, as an oversimplified example,  adding three numbers, written below in black. If you&#8217;re not allowed to rewrite the problem, then it&#8217;s tedious: following the grade-school <a href="https://en.wikipedia.org/wiki/Addition#Carry">right-to-left algorithm for addition</a>, you have to figure out that positions<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> 8, 17, and 26 all correspond to each other, add those up, then add positions 7, 16, 25, etc. and then add the intermediate results. Relative to the length of the problem, this requires a lot of working memory, and it&#8217;s easy to make mistakes.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!roUR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!roUR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png 424w, https://substackcdn.com/image/fetch/$s_!roUR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png 848w, 
https://substackcdn.com/image/fetch/$s_!roUR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png 1272w, https://substackcdn.com/image/fetch/$s_!roUR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!roUR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png" width="349" height="418.39181286549706" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:615,&quot;width&quot;:513,&quot;resizeWidth&quot;:349,&quot;bytes&quot;:32331,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!roUR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png 424w, https://substackcdn.com/image/fetch/$s_!roUR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png 848w, 
https://substackcdn.com/image/fetch/$s_!roUR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png 1272w, https://substackcdn.com/image/fetch/$s_!roUR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30e54ac0-f1e7-4f88-a564-f4a8ccfa3034_513x615.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h3>3. Papers that Attack these Problems</h3><p>Several researchers have experimented with clever ways to solve these issues. 
Let&#8217;s go through them!</p><p><strong><a href="https://arxiv.org/pdf/2403.05845">Reverse That Number! Decoding Order Matters in Arithmetic Learning</a></strong></p><p>If the problem in adding long numbers is that they cannot be naively added left-to-right the way you read them, but rather have to be <em>aligned right</em> to add the digits column-by-column, then why don&#8217;t you teach the LLM to reverse the digits and then add them left-to-right? This technique works well, improving on state-of-the-art accuracy by about 11%.</p><p><strong><a href="https://arxiv.org/pdf/2402.14903">Tokenization Counts: the Impact of Tokenization on Arithmetic in Frontier LLMs</a></strong></p><p>Instead of reversing the numbers, you can also adjust the tokenizer.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> In this paper, the authors enforce commas to tokenize long numbers more reliably, and run the tokenization process from right-to-left rather than left-to-right. This performs much better: by contrast, left-to-right tokenization yielded systematic errors in certain digit positions. The authors improve on state-of-the-art accuracy by about 14%.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> </p><p><strong><a href="https://arxiv.org/pdf/2311.14737">Positional Description Matters for Transformers Arithmetic</a></strong></p><p>In this paper, the authors find several results:</p><ol><li><p>They asked whether transformers <em>trained on arithmetic tasks in isolation</em> could transfer this knowledge to arithmetic embedded in natural language. They found that just training on isolated arithmetic data isn't enough.</p></li><li><p>They changed how addition tasks were represented, e.g. by adding spaces randomly between digits. 
They found that this helped models generalize better to adding longer numbers than they had seen during training.</p></li><li><p>They found that padding all numbers to the same length with leading zeroes, and using a reversed-digit approach (similar to other authors), would improve performance on multiplication tasks.</p></li><li><p>They found that <strong>introducing random noise to the positional encodings</strong> helped the model avoid overfitting to specific positions, and instead encouraged the model to learn more generalizable patterns. To me, this is the key result of the paper, though they don&#8217;t specify what percentage improvement this provides over the state-of-the-art.</p></li></ol><p>Combining these techniques, they achieved <strong>100% accuracy</strong> for multiplication tasks of 12-digit numbers, and <strong>99% accuracy</strong> for 15-digit multiplication. This was a dramatic improvement over the baseline, which is roughly 0% after 5-digit multiplication.</p><p><strong><a href="https://arxiv.org/pdf/2405.17399">Transformers Can Do Arithmetic with the Right Embeddings</a></strong></p><p>In this paper, the authors replace the positional encodings altogether. Instead, they attach a <em>positional embedding</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> to each token, which tells the LLM the position of each digit relative to the start of the number. 
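</p><p>As a rough sketch of the idea (my own toy illustration, not the paper&#8217;s implementation): give every digit an index counted within its own number, so that corresponding digits of different operands share the same index no matter where they sit in the overall sequence.</p>

```python
def abacus_positions(tokens: list[str]) -> list[int]:
    """Toy per-digit positions: each digit token is indexed from the
    start of its own number; non-digit tokens reset the counter."""
    positions, digit_idx = [], 0
    for tok in tokens:
        if tok.isdigit():
            digit_idx += 1
            positions.append(digit_idx)
        else:
            digit_idx = 0  # a non-digit token ends the current number
            positions.append(0)
    return positions

# In "478+261", the digits "4" and "2" both get index 1, "7" and "6"
# both get index 2, and so on, unlike global positional encodings.
print(abacus_positions(list("478+261")))  # [1, 2, 3, 0, 1, 2, 3]
```

<p>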
These so-called &#8220;<strong>Abacus Embeddings</strong>&#8221; delivered another <strong>colossal improvement over the state-of-the-art</strong>: when trained on adding 20-digit numbers and then tested on adding 100-digit numbers, they achieved 97.9% accuracy, whereas prior models were in the 20-30% range.</p><p>Some additional tweaks &#8212; Input Injection<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> and Looped Transformer Layers<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> &#8212; added further improvements, <strong>raising the accuracy to 99.1%</strong>.</p><h3>4. Other Useful Work</h3><p>We&#8217;ve seen that architectural tweaks can significantly improve LLM performance for addition. There are more texts that can help us understand the underlying dynamics.</p><p><strong><a href="https://arxiv.org/pdf/2307.03381">Teaching Arithmetic to Small Transformers</a></strong> </p><p>This paper stands out to me for two observations:</p><ol><li><p>When teaching addition to their transformer, they observe a &#8220;phase transition&#8221; from poor performance to nearly perfect performance. Unlike many other cases in machine learning, the model isn&#8217;t gradually improving. 
It accumulates information until it reaches a critical point at which it suddenly understands the underlying rule.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HM5K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HM5K!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png 424w, https://substackcdn.com/image/fetch/$s_!HM5K!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png 848w, https://substackcdn.com/image/fetch/$s_!HM5K!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png 1272w, https://substackcdn.com/image/fetch/$s_!HM5K!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HM5K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png" width="476" height="355.37481031866463" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:984,&quot;width&quot;:1318,&quot;resizeWidth&quot;:476,&quot;bytes&quot;:128988,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HM5K!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png 424w, https://substackcdn.com/image/fetch/$s_!HM5K!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png 848w, https://substackcdn.com/image/fetch/$s_!HM5K!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png 1272w, https://substackcdn.com/image/fetch/$s_!HM5K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F02978d73-b2a5-4f91-92a9-83d24caa0983_1318x984.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"></svg></button></div></div></div></a></figure></div></li><li><p>The authors suggest that the &#8220;phase transition&#8221; occurs because addition can be represented as low-rank <a href="https://en.wikipedia.org/wiki/Matrix_completion">matrix completion</a>. Matrix completion has a similarly sharp threshold: with too few observed examples, it&#8217;s very hard to complete such matrices correctly, but once you&#8217;ve seen enough, you can complete them almost perfectly. Since both capabilities emerge sharply after comparable amounts of training data, the authors conclude that the same dynamic is at work.</p></li></ol><p><strong><a href="https://x.com/shinboson/status/1792420144511431099">Shin Boson&#8217;s Twitter Thread</a></strong></p><p>This is not a paper, but a Twitter thread where Shin investigates how LLMs multiply, reaching some of the same observations as the other papers above, with some neat illustrations. 
He makes two additional observations: </p><ol><li><p>The fact that LLMs represent positional encodings one-dimensionally is a limitation, since we usually view mathematical text in two dimensions. Consider the example from earlier, where we added three numbers, and see how much easier it is to run the same operation when we rewrite it in two dimensions:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!FZxB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!FZxB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png 424w, https://substackcdn.com/image/fetch/$s_!FZxB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png 848w, https://substackcdn.com/image/fetch/$s_!FZxB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png 1272w, https://substackcdn.com/image/fetch/$s_!FZxB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!FZxB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png" width="357" height="286.7134502923977" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:412,&quot;width&quot;:513,&quot;resizeWidth&quot;:357,&quot;bytes&quot;:20983,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!FZxB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png 424w, https://substackcdn.com/image/fetch/$s_!FZxB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png 848w, https://substackcdn.com/image/fetch/$s_!FZxB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png 1272w, https://substackcdn.com/image/fetch/$s_!FZxB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F540aa6a9-d264-4826-8e07-e49fa7bac9a5_513x412.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I suspect there are certain types of math problems where the relevant positions are not just what&#8217;s to the left or right of a given token, but also what&#8217;s above or below. Linear algebra problems involving matrices seem like a good example. For these problems, I wonder if two-dimensional positional encodings could help. Two-dimensional positional encodings would be much closer to how humans see the world, anyway.</p></li><li><p>He ascribed the failure-states of LLMs to an extremely short working memory, which is worth digging into further. 
There&#8217;s an interesting open question as to whether LLMs suffer from:</p><ol><li><p>Not having enough working memory to store and transform intermediate results &#8212; for example, &#8220;carrying&#8221; in addition, or adding up intermediate sums for multiplication;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> </p></li><li><p>Not being able to apply enough attention to every token in the sequence, thereby losing some information that then causes arithmetic error. (There&#8217;s a related paper, <em>Working Memory Capacity of ChatGPT, </em>that&#8217;s not hugely informative, but I&#8217;ll cover it in the Appendix.)</p></li></ol></li></ol><h2>5. Conclusion</h2><p>Over the last two years, many people have claimed that LLMs can&#8217;t do math. I&#8217;ve seen some folks even suggest that this is a limitation of the transformer architecture; that it can&#8217;t learn to generalize arithmetic operations. This seems false. </p><p>Run-of-the-mill LLMs appear hobbled in their arithmetic accuracy due to how they implement tokenization and positional encodings. We&#8217;ve seen several approaches that adjust those, and then deliver near-perfect accuracy. In particular, Abacus Embeddings seem to generalize well, and I am very curious how far their length generalization goes. An LLM should be able to learn the addition rule with limited data &#8212; could we train a model on adding 10-digit numbers and then have it accurately add 10,000-digit numbers?</p><p>Going further, the key open question is whether an LLM can learn arithmetic <em>perfectly</em>. LLMs deliberately use some randomness: in spite of this, is it possible to train them to execute arithmetic computations deterministically? 
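For intuition on where that randomness comes from, here is a toy sketch (plain Python, not any particular model's implementation) of the usual sampling step: a temperature-scaled softmax draw versus deterministic greedy argmax decoding.

```python
import math
import random

def softmax_sample(logits, temperature=1.0, rng=random):
    """Draw a token index from a temperature-scaled softmax: randomness enters here."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(weights)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1

def greedy_decode(logits):
    """Argmax decoding: no randomness, the same logits always yield the same token."""
    return max(range(len(logits)), key=lambda i: logits[i])
```

Greedy decoding removes the sampling randomness, but it does not by itself make the underlying computation correct; that is the harder question.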
Going even further, would it be possible to formally prove <em>correctness</em> of such networks in the same way that programs can be <a href="https://en.wikipedia.org/wiki/Correctness_(computer_science)">provably correct</a>?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> Such a bridge between deterministic and non-deterministic behavior would not only be extremely interesting, but would also powerfully extend the capabilities of such neural nets.</p><p>I&#8217;ll leave you with a few more questions that are on my mind:</p><ol><li><p>Are there benefits to using two-dimensional positional encodings? Not just for arithmetic problems, but very generally. Naively, I think two-dimensional positional encodings are more representative of how humans view the world.</p></li><li><p>To what extent is dilution of self-attention a fundamental limitation of transformer architectures, and can it be overcome?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p></li><li><p>The examples of length generalization in these papers have been very modest. Nobody&#8217;s tried to add numbers with 100,000 digits. Are further architectural changes needed to scale? It&#8217;s not clear from the research specifically <em>how</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> these nets are handling intermediate computation: e.g. adding numbers requires some &#8220;carrying&#8221; from one place to the next, and multiplying numbers requires adding intermediate computational results. How does this scale?
I thought the use of looped layers in the Abacus Embeddings paper was interesting, and wonder if recurrent techniques<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> could help scale these nets to arbitrary input lengths.</p></li></ol><div><hr></div><h3>Appendix 1: Other Papers</h3><p>Below are some other papers that I&#8217;ve read and that you might consider part of the literature. I haven&#8217;t listed them in the sections above because their findings are more tangential to this discussion; most of them are about limitations that now appear at least partially overturned. (They do remain relevant insofar as many questions remain about limitations for larger scales of inputs.)</p><p><strong><a href="https://arxiv.org/pdf/2407.17963">Relating the Seemingly Unrelated: Principled Understanding of Generalization for Generative Models in Arithmetic Reasoning Tasks</a></strong></p><p>The authors assert that other papers have focused too much on architecture (e.g. positional encoding) and not enough on the fundamental difficulties of doing arithmetic.
They describe a number of properties that make arithmetic tricky, focusing on multiplication and modular arithmetic. However, at least for multiplication, these difficulties are probably overstated, since architectural solutions in some of the papers above brought multiplication tasks to near-perfect accuracy.</p><p><strong><a href="https://arxiv.org/pdf/2305.18654">Faith and Fate: Limits of Transformers on Compositionality</a></strong></p><p>The authors suggest that LLMs struggle with arithmetic because they memorize simple patterns, rather than truly learning the rules of arithmetic: they claim that transformers <em>fail to generalize</em> these patterns. Errors in reasoning compound as problems become more complex. This may all be true for naive LLM approaches, but the papers above have shown that LLMs can, with architectural tweaks, actually generalize these patterns to a much more significant extent.</p><p><strong><a href="https://arxiv.org/pdf/2402.09371">Transformers Can Achieve Length Generalization But Not Robustly</a></strong></p><p>Similar to the <em>Faith and Fate</em> paper, this one focuses on limitations. They observe a number of issues with getting transformers to generalize learned patterns beyond ~2.5x the input length. However, this paper similarly looks overturned, e.g. Abacus Embeddings were able to generalize well beyond this suggested limitation.</p><p><strong><a href="https://arxiv.org/pdf/2310.16028">What Algorithms can Transformers Learn? A Study in Length Generalization</a></strong></p><p>This paper studies what algorithms transformers can learn and generalize, i.e. perform on inputs longer than what they&#8217;re trained on. To that effect, the paper introduces a domain-specific programming language, <strong>RASP-L</strong>, which contains operations designed to mimic those that are natural to transformers, e.g. creating attention matrices and applying them to sequences of tokens. 
</p><p>The central idea is that if a problem or operation can be expressed as a RASP-L program, then a transformer is more likely to learn and generalize the underlying pattern effectively. However, the authors found it difficult to express arithmetic operations in RASP-L. </p><p><strong><a href="https://arxiv.org/pdf/2305.03731">Working Memory Capacity of ChatGPT: An Empirical Study</a></strong></p><p>After all, why is it that LLMs are so good at short arithmetic problems and then get confused when the problems get long? Surely the difficulty isn&#8217;t just in &#8220;carrying the 1s&#8221;? This paper runs memory tests (<a href="https://en.wikipedia.org/wiki/N-back">n-backs</a>) with ChatGPT. The &#8220;memory test&#8221; is a loose analogy here, since ChatGPT doesn&#8217;t really have <em>memory</em>; rather, the user supplies longer prompts of historical context on each iteration.</p><p>In any event, the experiments found that ChatGPT struggled with tasks beyond n=3, similar to humans. The paper posits that LLMs struggle to maintain context over long input sequences, specifically because self-attention is computationally expensive to maintain. As the sequence gets longer, the self-attention is spread thinner across it: each token gets less focused attention on average, effectively causing the model to lose track of certain tokens.</p><h3>Appendix 2: Chain-of-Thought Reasoning</h3><p>If you want LLMs to do arithmetic, then you may consider chain-of-thought reasoning to sidestep the length generalization issues. </p><p>I haven&#8217;t yet covered this because I don&#8217;t think it&#8217;s interesting anymore: the technique is well-understood. Of course you&#8217;ll get better performance from an LLM when trying to do arbitrarily complex tasks if you break them down into tiny sub-tasks and then solve them one by one.
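To make the decomposition concrete, here is a hypothetical scratchpad for multi-digit addition, in the spirit of the papers listed below (the exact format is invented for illustration): each line is a trivial single-digit sub-task with an explicit carry.

```python
def addition_scratchpad(a: int, b: int):
    """Decompose a + b into single-digit steps with explicit carries."""
    xs, ys = str(a)[::-1], str(b)[::-1]  # least-significant digit first
    steps, digits, carry = [], [], 0
    for i in range(max(len(xs), len(ys))):
        da = int(xs[i]) if i < len(xs) else 0
        db = int(ys[i]) if i < len(ys) else 0
        total = da + db + carry
        steps.append(f"{da} + {db} + {carry} = {total}: write {total % 10}, carry {total // 10}")
        digits.append(total % 10)
        carry = total // 10
    if carry:
        digits.append(carry)
    result = int("".join(map(str, reversed(digits))))
    return steps, result
```

Each emitted step is small enough that a model only needs to track one digit pair and one carry at a time, which is exactly the appeal of scratchpad-style prompting.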
(You could go one step further and just have the LLM call a Python console&#8230;) I&#8217;m interested in whether an LLM can learn an arithmetic operation <em>perfectly</em>.</p><p>Regardless, in the spirit of covering relevant papers, consider reading these:</p><ul><li><p><a href="https://arxiv.org/pdf/2112.00114">Show your work: scratchpads for intermediate computation with language models</a></p></li><li><p><a href="https://arxiv.org/pdf/2201.11903">Chain-of-Thought Prompting Elicits Reasoning in Large Language Models</a></p></li><li><p><a href="https://arxiv.org/pdf/2406.12288">An Investigation of Neuron Activation as a Unified Lens to Explain Chain-of-Thought Eliciting Arithmetic Reasoning of LLMs</a></p></li></ul><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>You may have noticed that the comma was being tokenized: this means that you might expect an LLM to not interpret &#8220;483,647&#8221; and &#8220;483647&#8221; the same way. The textual representation is different, and therefore the resulting tokens will also be different.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>These positions are indices. Importantly, positional encodings are not indices! I&#8217;m oversimplifying here to try to supply some intuition.
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>In general, the process of tokenization remains <a href="https://x.com/karpathy/status/1789590397749957117">overlooked</a> for improving LLMs.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>They mention an improvement of <em>up to 20% </em>over the state-of-the-art in other sections of the paper.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>What&#8217;s the difference? <em>Positional embeddings</em> are learned representations (i.e. inferred as the model runs) that explicitly encode the position of elements in a sequence, while <em>positional encodings</em> are typically fixed patterns added to inputs to help the model understand their order.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Input Injection reintroduces the original input embeddings into each layer of the transformer model, such that positional information about the input is preserved throughout processing. 
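As a toy sketch of that mechanism (plain Python lists standing in for tensors; `layers` is an assumed list of vector-to-vector functions, not any specific library API):

```python
def forward_with_input_injection(x0, layers):
    """Run x0 through the layers, re-adding the original input at each one."""
    h = list(x0)
    for layer in layers:
        # Input Injection: add the raw input embedding back in, so information
        # about the original sequence survives deep into the stack.
        h = [hi + xi for hi, xi in zip(layer(h), x0)]
    return h
```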
Maintaining this direct connection to the original input means that the model can better handle tasks that require precise positional or contextual understanding.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>You may be aware that a neural network passes data through &#8220;layers&#8221; of neurons, transforming the data at each step. Looped Transformer Layers refer to a technique where instead of having multiple unique layers in sequence, a single layer (or a small set of layers) is applied repeatedly with shared parameters. This iterative application can help the model effectively perform multi-step reasoning tasks &#8212; it&#8217;s similar to applying the same reasoning step several times over.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>I&#8217;m working on another experiment to answer this question, and may have a blog post coming out if the results are interesting at all.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>This is a largely unsolved problem in computer science, and would be hugely ambitious to attempt.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>There&#8217;s a significant amount of ongoing research on this topic, which is both fascinating and too voluminous to include here.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a 
id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>It would be interesting to inspect the neuron activations!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Note that looped layers, or recurrent techniques more generally, can conceptually bring models closer to learning deterministic operations. This is because they involve applying the same set of rules or transformations repeatedly, which mirrors the step-by-step process of deterministic algorithms.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#20: No More EU Fines for Big Tech]]></title><description><![CDATA[Over the last decade, the EU has taken an aggressive regulatory approach toward Big Tech. Citing concerns about privacy and monopolization, they have fined the likes of Google and Meta for billions of dollars, and enacted countless pieces of regulation.]]></description><link>https://essays.johnloeber.com/p/20-no-more-eu-fines-for-big-tech</link><guid isPermaLink="false">https://essays.johnloeber.com/p/20-no-more-eu-fines-for-big-tech</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Tue, 23 Jul 2024 18:18:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4044e262-946b-47ef-b27b-e2985ba52579_3072x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gChG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!gChG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png 424w, https://substackcdn.com/image/fetch/$s_!gChG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png 848w, https://substackcdn.com/image/fetch/$s_!gChG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!gChG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gChG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png" width="1456" height="728" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:728,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7775807,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!gChG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png 424w, https://substackcdn.com/image/fetch/$s_!gChG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png 848w, https://substackcdn.com/image/fetch/$s_!gChG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!gChG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F81879a67-cb4e-4401-b7ab-f8945a88655a_3072x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The EU takes an aggressive stance toward American Big Tech. Citing concerns about privacy and monopolization, it has enacted countless regulations, and fined Google and Meta for billions of dollars. In the last six months, EU regulators have kicked this motion into overdrive: </p><ul><li><p>They adopted the <a href="https://digital-markets-act.ec.europa.eu/about-dma_en">Digital Markets Act</a> (DMA), which they used to immediately <a href="https://www.theguardian.com/business/2024/mar/25/eu-investigates-apple-meta-google-alphabet-digital-markets-act">open investigations</a> into Apple, Google, and Meta. </p></li><li><p>They adopted the <a href="https://en.wikipedia.org/wiki/Artificial_Intelligence_Act">AI Act</a> to constrain AI applications.</p></li><li><p>They slapped <a href="https://www.wired.com/story/apple-spotify-2-billion-fine-eu/">Apple with a $2B fine</a>. </p></li><li><p>In July alone, they opened <a href="https://www.reuters.com/technology/french-antitrust-regulators-preparing-nvidia-charges-sources-say-2024-07-01/">antitrust proceedings against Nvidia</a>, <a href="https://en.agcm.it/en/media/press-releases/2024/7/PS12714">antitrust investigations</a> into Google, and <a href="https://x.com/ThierryBreton/status/1811699711591489637">threatened to fine Twitter</a> over seemingly-trivial Blue Checks.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p></li></ul><p>The posture is clear: the EU is not satisfied with the bloodletting-to-date and is raising its demands from Big Tech. 
The AI Act and DMA both may assess penalties as a percentage of <em>global turnover,</em> and are so <a href="https://daringfireball.net/2024/06/eu_reaping_what_it_sows">broad in scope</a> that European regulators are emboldened to pursue tech giants for practically limitless amounts of money.</p><p><strong>But the EU is overplaying its hand.</strong> It has been able to make its demands to date because it is easier and cheaper for Big Tech to pay and comply than to resist. Of course, this encourages the EU to make even more demands, but eventually those will become too onerous and expensive. We are nearly at that point. </p><p>Kissinger wrote that relations between states are best understood in terms of <strong>power</strong> and <strong>legitimacy</strong>:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> in this essay, I will argue that the EU&#8217;s actions against Big Tech lack, and have always lacked, substantial legitimacy. Furthermore, the EU vastly overestimates its own power in this relationship: when push comes to shove, it is Big Tech that actually holds the cards, and the EU would do well to change its course. 
Indeed, the current course cannot, and will not, continue.</p><h2>A Brief History of Fines</h2><p>For quick background, below is a non-exhaustive list of $100M+ fines that the EU has assessed on American tech companies over the past decade:</p><ul><li><p>June 2017: Google fined $2.7B (<a href="https://en.wikipedia.org/wiki/Antitrust_cases_against_Google_by_the_European_Union">Antitrust over Google Shopping</a>)</p></li><li><p>July 2018: Google fined $5B (Antitrust over Android)</p></li><li><p>March 2019: Google fined $1.5B (Antitrust over advertising)</p></li><li><p>July 2021: Amazon fined $887M (<a href="https://www.cnbc.com/2021/07/30/amazon-hit-with-fine-by-eu-privacy-watchdog-.html">GDPR</a>)</p></li><li><p>September 2021: Meta fined $266M (<a href="https://www.huntonak.com/privacy-and-information-security-law/irish-commissioner-fines-whatsapp-e225-million-for-gdpr-violations">GDPR over WhatsApp</a>)</p></li><li><p>January 2022: Google fined $169M (<a href="https://www.cnbc.com/2022/01/06/google-hit-with-150-million-euro-french-fine-for-cookie-breaches.html">GDPR</a>)</p></li><li><p>September 2022: Meta fined $427M (<a href="https://techcrunch.com/2022/09/05/instagram-gdpr-fine-childrens-privacy/">GDPR over Instagram</a>)</p></li><li><p>November 2022: Meta fined $275M (<a href="https://www.euronews.com/next/2022/11/28/meta-hit-with-265-million-fine-by-irish-regulators-for-breaking-europes-data-protection-la">GDPR</a>)</p></li><li><p>April 2023: Meta fined $1.3B (<a href="https://www.nytimes.com/2023/05/22/business/meta-facebook-eu-privacy-fine.html">GDPR</a>)</p></li><li><p>March 2024: Apple fined $1.95B (<a href="https://www.cnbc.com/2024/03/04/apple-hit-with-more-than-1point95-billion-eu-antitrust-fine-over-music-streaming.html">Antitrust over Apple Music</a>)</p></li></ul><p>That&#8217;s a little under $14.5B over seven years, or just over $2B a year. 
For scale, the US exports $592B to the EU <a href="https://ustr.gov/countries-regions/europe-middle-east/europe/european-union">annually</a>, and EU-originating annual revenues for these companies sum up to over $100B.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> So, there&#8217;s still plenty of room for the EU to escalate. This is what the EU appears to be preparing.</p><h2>Context on EU and US Relations</h2><p>The EU&#8217;s aggressive position on Big Tech does not exist in a vacuum. There&#8217;s important historical and contemporary context:</p><h4><strong>Asserting Leadership</strong></h4><p>Many international treaties and regulations were led by Europe. That was as true in the distant past (see the <a href="https://en.wikipedia.org/wiki/Geneva_Conventions">Geneva</a> or <a href="https://en.wikipedia.org/wiki/Hague_Conventions_of_1899_and_1907">Hague</a> Conventions) as it is more recently. The EU led the charge on international standards for <a href="https://en.wikipedia.org/wiki/Registration%2C_Evaluation%2C_Authorisation_and_Restriction_of_Chemicals">chemical safety</a> and <a href="https://en.wikipedia.org/wiki/European_Union_Emissions_Trading_System">carbon emissions trading</a>, and its stringent regulations on consumer products, tobacco, and anti-money-laundering have set the pace for the EU&#8217;s global trading partners.</p><p>The EU is attempting to similarly set the pace with GDPR, the AI Act, the DMA, and other technology regulation. However, the international community generally isn&#8217;t following. That is an important break from history. It is no surprise that when the EU tries but fails to assert leadership, it will try again more forcefully. I read the EU&#8217;s mounting efforts to shape global norms as an attempt to assert and seize power and relevance. 
</p><h4><strong>Trade Generally</strong></h4><p>The US-EU trading relationship, while valuable and productive, is fraught with many sharp-elbowed negotiations at the margins. For example:</p><ul><li><p>Disputes over Boeing (US) and Airbus (EU), each accusing the other side of providing anti-competitive subsidies to their home aerospace giant, leading to billions in retaliatory tariffs, including in unrelated industries. </p></li><li><p>Disputes over a 2018 set of US tariffs on steel and aluminum imports. The EU retaliated with tariffs, again including unrelated industries.</p></li></ul><p>The US-EU trading relationship involves continuously testing boundaries and tit-for-tat exchanges. What&#8217;s different in Big Tech regulation is that it&#8217;s entirely one-sided: the EU doesn&#8217;t really have <em>Big Tech</em> of its own that the US could regulate in retaliation. Ironically, if it did, the EU&#8217;s regulatory efforts would probably seem <em>fairer</em> &#8212; just part of a tit-for-tat between roughly equally strong market participants. </p><p>The EU may view its Big Tech regulations as just another small negotiating chip in a much larger $1.3T annual trading relationship. However, the one-sided optics encourage the rest of the world to see it as unfair.</p><h2>Bad Precedent</h2><p>The EU&#8217;s framework goes so far as to assess fines on a percentage of <em>global turnover</em>:</p><ul><li><p>GDPR: up to <a href="https://gdpr-info.eu/issues/fines-penalties/">4% of global turnover</a> (top-line revenue);</p></li><li><p>AI Act: up to <a href="https://www.thebci.org/news/world-s-first-artificial-intelligence-ai-act-legislation.html">7% of global turnover</a>;</p></li><li><p>DMA: up to <a href="https://ec.europa.eu/commission/presscorner/detail/en/qanda_20_2349">20% of global turnover</a>;</p></li></ul><p>These just keep getting more expensive! 
The idea of issuing fines based on <em>global revenue </em>for <em>local violations of law</em> is a brazen stretch of legal convention: </p><ol><li><p>Penalties must be commensurate with damages;</p></li><li><p>Courts may assert their authority only over subjects in their jurisdiction.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p></li></ol><p>The legal convention would be for the EU to assess fines based on EU revenues, not global revenues. <strong>Permitting fines based on global revenue would set disastrous precedent:</strong> if the EU can set fines based on global revenue, why can&#8217;t any other country? Any other big market with a little bit of leverage could try to extract a slice of the pie. Why shouldn&#8217;t India, which has ~500M Meta users, start fining Meta for 10% of its global revenue? Why shouldn&#8217;t Brazil do the same? <a href="https://www.reuters.com/technology/nigerias-consumer-watchdog-fines-meta-220-million-violating-local-consumer-data-2024-07-19/">Or Nigeria</a>? And why should they keep their fines to Big Tech? Why don&#8217;t they fine Exxon Mobil for a percentage of global revenue? </p><p>This scope has no legal legitimacy: not only must Big Tech refuse to comply, but the US must reject it as a matter of national interest and international order.</p><h2>Illegitimate Fines</h2><p>EU fines have been asserted under GDPR, and will soon be asserted under the AI Act and DMA. All three of these operate on the notion of <strong>consumer protection</strong>: that EU persons have rights, most often articulated around privacy or data, that are being violated by Big Tech. You can view this almost like a class-action lawsuit, where some compensation is sought for harm done to a large group of people.</p><p>But for a class-action lawsuit to be legitimate, it <strong>must reward the consumers</strong>! 
Consider the $1.3B Meta fine: the <a href="https://www.edpb.europa.eu/system/files/2023-05/edpb_bindingdecision_202301_ie_sa_facebooktransfers_en.pdf">legal decision</a> cites 309M daily active users in Europe. Assuming the EU asserts that all such persons were violated, this means the fine is equivalent to $4.21 per person. As an EU citizen, I have two problems with this:</p><ol><li><p>I have not seen a single penny of my $4.21. The EU regulators alleged that my rights have been violated, pursued a legal case on my behalf, paid the legal fees with my tax dollars, recovered the damages, and then&#8230; pocketed them? Why should the tax rate on <em>recovered damages for violations to my rights</em> be 100%?</p></li><li><p>It is difficult to square the minute damages of $4.21 with the posturing of the EU. European regulators and media constantly express incredible scorn toward Meta; the narrative is so bombastic and severe as if Meta&#8217;s transgressions were wild violations of fundamental rights. But after all this drum-beating about privacy rights and the great value of my personal data, when the damages finally got priced, it turned out all this stuff is worthless? $4.21 can&#8217;t even buy me a sandwich.</p></li></ol><h4>Fundamentally Unserious Actions</h4><p>As I wrote in a related <a href="https://loeber.substack.com/p/14-why-europe-fails-to-create-wealth">prior blog post</a>: </p><blockquote><p>If they can&#8217;t ban products, then it&#8217;s not consumer protection: it&#8217;s just wealth extraction. European politicians always drum on about supposed violations from Google and Facebook, fine them for some totally unimpactful amount of money, and then pipe down for six months before starting over again. If those politicians had real grievances, they would try to ban those products, or build local alternatives. They do neither. 
They&#8217;re just selectively enforcing regulation to extract what I view as a bribe to operate.</p></blockquote><p>The great discrepancy between the public posture and the minute per-capita fines suggests a triviality to the grievances pursued. Practically, it seems that regulators are pricing and pursuing very small negative externalities imposed by Big Tech firms &#8212; but under that regime, there should be higher priorities. European consumers suffer far more from other firms and practices: e.g. <a href="https://www.weforum.org/agenda/2019/04/air-pollution-in-europe-is-reducing-the-average-lifespan-by-2-years/">air pollution</a> is priced at roughly <a href="https://qz.com/1424097/cutting-air-pollution-could-save-europe-up-to-775-billion-by-2025">$110B per year</a> in negative economic externalities (~55x Big Tech fines), while the long-term loss of economic competitiveness, relatively weak tech industry, and failure to transition away from Russian oil are incalculably damaging to EU citizens. By contrast, the European Commission is dabbling in trifles.</p><p>The issue with dabbling in trifles is not just that it&#8217;s relatively unimportant and not a good use of lawmaker time, but also that trifles are plentiful: this makes it difficult to enforce the law in a consistent manner. Indeed, it encourages <em>selective enforcement</em>, which saps the appearance of legitimacy. When the EU threatened to fine Twitter over Blue Checks, that did not seem like an impartial application of the law, but like a personal grievance. Such capriciousness is no standard for any respectable authority.</p><h2>The Realpolitik of Technology</h2><p>Having gone over issues of <em>legitimacy</em>, let&#8217;s talk about <em>power</em>. 
Thierry Breton, the relevant leader in the European Commission, <a href="https://daringfireball.net/2024/03/more_on_the_eus_market_might">states</a>:</p><blockquote><p>And a market of 450 million customers is simply unthinkable for anyone not to be there.</p><p>Where the digital giants could pay fines of several billion dollars without batting an eye&#8201;&#8212;&#8201;by the way, when they had to pay them, after long years of procedures, which was not systematic, far from it...&#8201;&#8212;&#8201;today none of them can afford not to be in our market.</p><p>This is the reality of the balance of power of the world in which we operate.</p></blockquote><p>Mister Breton, you may be mistaken! Specifically, in two ways:</p><h4>1. Overstatement of EU Market Size</h4><p>As a global market, Europe is not as big as it used to be. And the EU is smaller yet. If you take a look at Apple&#8217;s financial statements by geography, &#8220;Europe&#8221; accounts for 25% of global revenue. But in Apple&#8217;s consolidation, &#8220;Europe&#8221; includes the UK, Norway, Switzerland, Russia, Turkey, and the entire Middle East! None of these are EU members. The <a href="https://daringfireball.net/2024/03/eu_share_of_apples_revenue">EU might account only for 7%</a> of Apple&#8217;s global revenue. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OXSI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OXSI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png 424w, https://substackcdn.com/image/fetch/$s_!OXSI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png 848w, https://substackcdn.com/image/fetch/$s_!OXSI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png 1272w, https://substackcdn.com/image/fetch/$s_!OXSI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OXSI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png" width="641" height="360" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:360,&quot;width&quot;:641,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:61958,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OXSI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png 424w, https://substackcdn.com/image/fetch/$s_!OXSI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png 848w, https://substackcdn.com/image/fetch/$s_!OXSI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png 1272w, https://substackcdn.com/image/fetch/$s_!OXSI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc57750b-e08a-4cf0-b839-85c2675d1b53_641x360.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>7% is still a big market, but Apple is by no means dependent on it. Especially considering the exceptionally high level of operational headache in complying with European requirements, if it comes to be Apple&#8217;s view that the fines-as-percentage-of-global-revenue cannot be avoided, then it may be rational to pull out. John Gruber makes a <a href="https://daringfireball.net/2024/03/more_on_the_eus_market_might">similar argument for Meta</a>.</p><h4>2. Consumer Dependency</h4><p>Where the rubber really hits the road is that consumers are dependent on Big Tech. They like using iPhones. They like using Instagram. They like using Amazon. And European businesses everywhere use Google services. The EU would struggle severely to go up against the most popular and widely used products on the planet.</p><p>This is the nature of true power: the firms have captured the hearts and minds of the public. 
Even if the EU were to ban these companies, consumers wouldn&#8217;t want to lose access to these services. Their digital lives are on them. Rather than moving from WhatsApp back to Skype and losing all their contacts, they would use VPNs to access software products, and purchase Apple devices on grey markets. </p><p>Moreover, the EU doesn&#8217;t have true local alternatives. If it pursues Nvidia on antitrust grounds: does it really want Nvidia GPUs to be replaced by, say, Huawei GPUs? Does it want Facebook to be replaced by VK? If EU regulators are motivated by concerns over unaccountable, outside influences, I might suggest that American Big Tech is still their best option. </p><h4>Power</h4><p>To Europeans, Big Tech products are hard-to-replace public services. In past nation-tech disputes, like <a href="https://www.theguardian.com/media/2021/feb/23/facebook-reverses-australia-news-ban-after-government-makes-media-code-amendments">Facebook and Google in Australia</a> (2021), <a href="https://www.politico.com/news/2023/08/14/behind-trudeaus-standoff-with-big-tech-00110924">Facebook in Canada</a> (2023), or <a href="https://thefix.media/2023/8/31/google-news-in-spain-the-legacy-of-its-shutdown-and-the-impact-of-its-reopening">Google News in Spain (2014)</a>, even partial withdrawals by tech firms generated significant backlash and gave the firms real negotiating leverage.</p><p>Never forget: these Big Tech products are, for the most part, cloud services. They can simply be turned off remotely, from one minute to the next. Hypothetically, if Big Tech were to coordinate, play true hardball, and shut off EU-facing products, the EU economy would grind to a halt overnight. Imagine the fallout from hundreds of millions of people suddenly not having email anymore. Without AWS, GCP, Azure, etc. <em>things simply wouldn&#8217;t work</em>. We live in a digital world; the dependencies are <em>everywhere</em>. 
It&#8217;d be like when OPEC constrained oil supply in the 70s, except percolating much more deeply and instantaneously throughout economies.</p><h4>Falling Behind</h4><p>Of course, it&#8217;s very unlikely for Big Tech to withdraw from the EU entirely. That would be drastic. The reality is subtler, and we&#8217;re seeing it play out right now: Meta is not making its <a href="https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations">multimodal Llama models available in the EU</a>. Apple isn&#8217;t going to bring <a href="https://www.theverge.com/2024/6/10/24175405/wwdc-apple-ai-news-features-ios-18-macos-15-iphone-ipad-mac">Apple Intelligence to the EU</a>. These are important, state-of-the-art products. If you believe at all that AI is promising or important, then EU businesses and consumers will suffer from not having access to them. </p><p>We may see the EU and US technology markets bifurcate, similar to how automobile markets have separated. Cars are widespread in Latin America, but the models are overwhelmingly older, smaller, less safe, and less fuel efficient than ones in the EU or US. This is due to cost, but all things being equal, consumers would be better off with newer models. EU consumers might similarly end up with older, lower-quality technology everywhere &#8212; but not due to cost, just due to regulation!</p><p>Because technology builds on itself, omissions compound. Maybe multimodal Llama AI is not important for EU consumers today. But what if the best radiology AI assistant gets built on Llama AI &#8212; and EU patients can&#8217;t have access? Or an EU business needs the best-in-class AI to remain globally competitive? What if Apple Intelligence can automatically call an ambulance for you if you have a heart attack &#8212; but not in the EU? 
If only second-tier technology is available, then after several years of falling behind, the democratic masses will realize they have been deprived of <em>real things of value</em>, and they will undo these policies.</p><h2>Suggestions</h2><p>The EU must <strong>compete</strong> or <strong>cooperate</strong>. Either one is fine. But it would be ill-advised to continue the current regime of low-grade economic harassment of its nominal allies by siphoning off fines and imposing obnoxious requirements. </p><ul><li><p>It is inappropriate for EU regulators to assess fines based on global revenue. Such precedent cannot stand. The US must, and surely will, flex its overall trading relationship with the EU to put an end to this.</p></li><li><p>The EU must recognize that the US will eventually push back, and find a way to ignore or sidestep<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> these regulations. At that point, they will cease to impose anything useful on foreign businesses, while constraining EU businesses.</p></li><li><p>The EU&#8217;s regulatory gamesmanship will also be under long-term pressure from its own citizens, who will not appreciate access to lower-quality goods and services. If the single reason why their businesses are less competitive and their quality of life is lower is not one of cost but of abstract regulation, they will push back.</p></li><li><p>The EU would do well to recognize that it is broadly dependent on the products and services of Big Tech, while being only a minority revenue originator. In my view, the true push-come-to-shove balance of power rests with Big Tech more so than with EU regulators.</p></li><li><p>The EU would be well-advised to grow its own tech industry, which it currently lacks. No Big Tech company has anything even close to an EU competitor. 
More free-market competition would be good, both generally and for EU consumers.</p></li></ul><p>The last point bears repeating. I view all of this regulatory squabbling as a tragic and futile misdirection of effort. The EU recognizes that it is dependent on Big Tech, recognizes that it doesn&#8217;t have any ownership, and is using a combination of fines and regulation to claw back some power in this relationship. But that&#8217;s not sustainable; the relationship will be parasitic at best, never one of equals. The practical reality is that the EU must create and foster its own tech industry to be globally competitive. That is the only way for the EU to have a seat at this table.</p><h6>FURTHER READING</h6><p>Some work from other writers that acted as partial inspiration for this post, which I haven&#8217;t linked to already:</p><ul><li><p>Stratechery: <a href="https://stratechery.com/2024/the-e-u-goes-too-far/">The E.U. 
Goes Too Far</a></p></li><li><p>Daring Fireball: <a href="https://daringfireball.net/2024/03/ec_non_compliance_investigations">European Commission Opens DMA Non-Compliance Investigations Against Google, Apple, and Meta</a></p></li><li><p>Mostly Borrowed Ideas: <a href="https://x.com/borrowed_ideas/status/1810389275021816202">Tweet</a></p></li></ul><h6>ACKNOWLEDGEMENT</h6><p>Thanks to Evan Zimmerman for helping me refine my notes on Antitrust and US-EU trade relations.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Technically under the DSA, which is <a href="https://pandectes.io/blog/the-digital-markets-act-dma-and-the-digital-services-act-dsa-critical-differences/">a close counterpart</a> to DMA. For simplicity, I refer to both of them as DMA.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>As an aside: perhaps you disagree with Kissinger&#8217;s actions in office, which remain controversial. However, his academic work on statecraft and state relations is highly insightful, independent of political ideology.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I&#8217;m avoiding a more precise figure here because it&#8217;s difficult to specifically break out EU numbers. 
(For example, Apple&#8217;s revenues reported as originating in &#8220;Europe&#8221; include all of the Middle East.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>For comparison: in the US, the Federal Court of the Southern District of New York (SDNY) is the key venue where global financial crime usually is prosecuted. SDNY famously asserts broader-than-usual jurisdiction and authority. Nonetheless, SDNY usually assesses damages not on a global scale, but as suffered by subjects within its jurisdiction: i.e. US individuals, corporations, or the national interest as a whole. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Future blog post coming on &#8220;sidestepping&#8221; antitrust by geography-constrained software.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#19: Waymo the Leapfrog]]></title><description><![CDATA[When people talk about self-driving cars, they usually talk about them as though they&#8217;re in the future, still a little opaque and unproven.]]></description><link>https://essays.johnloeber.com/p/20-waymo-the-leapfrog</link><guid isPermaLink="false">https://essays.johnloeber.com/p/20-waymo-the-leapfrog</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sat, 22 Jun 2024 20:43:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Mucb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a3335d8-eb2a-4068-9d41-b096b27156f4_1788x1193.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When people talk about self-driving cars, they usually talk about them as though they&#8217;re in the future, 
still a little opaque and unproven. But as so often, the future is already here: it&#8217;s just not evenly distributed.</p><p>For the last few years, Waymo has been operating fully driverless cars in Phoenix and San Francisco. They&#8217;re commercially available. Anyone can download the app and hail one: I was recently passing through SF, and took the opportunity to make a few trips by Waymo. I found the experience hugely impressive, to the point that it made very clear what some parts of our future will look like. </p><p>In this essay, I will talk about my experience riding Waymos, predict the impacts that self-driving vehicles will have on our society, sketch out Waymo&#8217;s unit economics and discuss its competitive positioning vis-a-vis Uber and others, and finally argue that Waymos will totally overhaul how we think of public transit &#8212; offering a rare technological &#8220;leapfrog&#8221; opportunity to urban America.</p><h2><strong>The Experience</strong></h2><p>I hadn&#8217;t thought much about how it would be to ride in a self-driving car. Surely it&#8217;d be like taking an Uber, but without a driver &#8212; functionally, it&#8217;s all the same, right? </p><p>It&#8217;s actually totally different. And it&#8217;s wildly superior.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> </p><ol><li><p>The <strong>feeling</strong> of the drive is much better. Waymos accelerate very gently, and drive in a slow, defensive way such that they will never suddenly speed up or hit the brakes. In a city with lots of hills and stop-and-go traffic like SF, this is a godsend. If I were stuck with an impatient Uber driver who floors gas and brake, I&#8217;d feel nauseous after a few minutes. 
On the other hand, the gentleness of the Waymo means that I can read for the entire drive,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> and not even notice any traffic.</p></li><li><p>The Waymo is your own <strong>personal space</strong>. It&#8217;s really nice to have it entirely to yourself. That makes it much more relaxing than taking an Uber. When you share a small, confined space (like a car) with a stranger, it causes a very subtle stress: you&#8217;re paying attention to what they&#8217;re doing, and you think about your own actions so as not to bother them. It&#8217;s slightly taxing. Waymo obviates this.</p><ol><li><p>This is really noteworthy, because it means that <em>even having a great driver is worse than having no driver</em> in this respect. One&#8217;s first-glance intuition might be that driverless is better than a bad driver and worse than a great driver, but this is not so. Driverless is better in both cases.</p></li></ol></li><li><p>A Waymo is clearly a much safer driver than a regular human. Waymos actually keep their legal minimum distance of three feet to bicycles on the road! As a cyclist, I&#8217;ve had trucks speed past me at over sixty miles an hour with maybe six inches of clearance. I&#8217;d feel much safer sharing the roads with Waymos that are actually programmed to follow the law.</p></li></ol><p>In short: the Waymo experience is <em>great</em>. It&#8217;s hard to imagine how good it is if you&#8217;ve never been inside one, because you&#8217;ve been used to a different paradigm your entire life. Taking a Waymo is relaxing in a way that taking an Uber (let alone driving yourself and having to actually navigate traffic) simply never is. Even if a Waymo travels a little slower than a human driver, the ride is so gentle that you&#8217;re better off on net. 
It turns into <em>quiet personal time</em> or <em>reading time</em>, instead of being <em>waiting time</em>. </p><h2>They&#8217;ll be Everywhere</h2><p>There&#8217;s a saying in surfing: <em>slow is smooth. Smooth is fast</em>. Google has played it right: fifteen years of slow-is-smooth work on Waymo, delivering an <a href="https://waymo.com/blog/2023/12/waymo-significantly-outperforms-comparable-human-benchmarks-over-7-million/">impeccable safety record</a> and a magical experience. They are far ahead of any competition,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> ideally positioned to roll out nationally as quickly as regulators will allow. </p><p>I&#8217;d expect large fleets of Waymos to become commercially available in major US cities over the next few years. <a href="https://www.sfchronicle.com/bayarea/article/self-driving-cars-waymo-sf-19523750.php">Approval for roll-out</a> will become easier and easier as the cumulative safety record speaks for itself.</p><h2>They&#8217;ll be Cheap</h2><p>Waymo is still a rare luxury today. The <a href="https://www.nbcbayarea.com/investigations/googles-waymo-safety-study-on-driverless-cars/3311188/">SF fleet</a> has only about 250 cars, of which 100 are on the road at any time. This makes Waymo slightly more expensive than Uber. As more Waymo vehicles become available, you might expect Waymo to undercut Uber on pricing, and take the rest as margin. But I think Waymo will race to drive down its pricing much further. Why? Because Uber is not nearly as big as the terminal market size. 
For comparison:</p><ul><li><p>In the US, there are <a href="https://www.bts.gov/statistical-products/surveys/national-household-travel-survey-daily-travel-quick-facts">1.1 billion car trips every day</a>;</p></li><li><p>In the US, I estimate Uber and its competitors complete 19 million rides a day;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p></li></ul><p>In other words, there are ~58x more personal rides than rideshare rides in the US. The true TAM is that Waymo replaces all travel by car (and perhaps even all travel by bus, subway, etc.). To get there, Waymo has to bring its price down to the point that anyone with a car would pay to not have to drive themselves. Maximizing business value here is a <em>volume</em> game: the winning recipe is massively scaled deployment, with prices lower than any competition, and taking the thin margin. </p><h2>Unit Economics</h2><h4>Operating Expenses</h4><p>Not having a driver significantly lowers OpEx. 
Take a look at Uber:</p><ul><li><p>Uber <a href="https://uberpubpolicy.medium.com/understanding-ubers-share-of-driver-earnings-d75c4d5f6e23">claims</a> that its drivers get roughly 70% of the ride fare whereas drivers claim they get about 50% of the ride fare;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p></li><li><p>Uber&#8217;s <a href="https://investor.uber.com/news-events/news/press-release-details/2024/Uber-Announces-Results-for-Fourth-Quarter-and-Full-Year-2023/default.aspx">cost of revenue</a> is about 16% of gross bookings, and their cost of operations and support is about 1.8% of gross bookings;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p></li></ul><p>Clearly there is an opportunity to reduce the marginal cost of operating the ride by between 50 and 70%, perhaps more if the removal of driver operations, payouts, etc. further reduces costs. But Waymo&#8217;s safety record makes for even larger savings. Fewer crashes mean:</p><ol><li><p>Less need for expensive repairs;</p></li><li><p>Lower insurance rates;</p></li><li><p>Longer vehicular lifespans over which they depreciate.</p></li></ol><p>Long-term, you&#8217;d expect the price of a Waymo ride to be the sum of: </p><ol><li><p>The cost of gas (or electricity);</p></li><li><p>The per-mile depreciation of the vehicle;</p></li><li><p>The marginal cost of any supporting software or operations (e.g. 
cleaning);</p></li><li><p>Whatever the average consumer is willing to pay to not drive themselves, which might be a few dollars an hour, or a little more if the experience is really great and they win back some of the opportunity cost of driving.</p></li></ol><p>I think a decent first-order approximation, before factoring in any next-generation technology advances or economies of scale, is that Waymo could bring down operating costs 70% relative to Uber.  The world looks very different when a $40 half-hour Uber ride suddenly becomes $13 &#8212; that&#8217;s a lot of increased mobility.</p><h4>Capital Expenses + Payback Period</h4><p>Waymo vehicles are not cheap! Each one costs about <a href="https://blog.dshr.org/2023/11/robotaxi-economics.html">$200K</a> to deploy. Recapturing this CapEx is a tall order, and it makes Waymo&#8217;s prospects less clear.</p><p>Suppose a Waymo vehicle runs a gross profit of $10 per hour,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> and it drives for 20 hours a day on average. That&#8217;s $73,000 a year in gross profits. Even if the cost of deployment comes down to $150K, it takes 2.1 years to recapture the CapEx.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> If it drives an average of 25 miles per hour, then it&#8217;s putting on 182,500 miles per year, or about 383,000 miles by the time the CapEx is recouped. That&#8217;s a lot of miles! I don&#8217;t know what the life expectancy of the vehicle is after 383K miles, but it&#8217;s probably not great. Further, if any of the expensive self-driving hardware (Lidar, sensors, etc.) 
has to be replaced on any regular schedule, it&#8217;s possible that an individual Waymo would take many years to recapture its upfront cost, if it can do so at all.</p><p>This suggests that Waymo would have to make very big, long-term bets, risking significant CapEx with uncertain payoff profiles. Waymo might be severely unprofitable at significant scale before it arrives at viable unit economics.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> Importantly, Waymo kind of <em>has to</em> do this, because it must (1) use its current market-leader advantage to build distribution while competition is scant and (2) scale up in order to have economies of scale that can bring the costs down.</p><h2>A True Google Bet</h2><p>But never forget: Waymo is owned by Google, and making very big, very long-term CapEx bets is the Google style. This is the true nature of the moat in Google&#8217;s search engine:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> Google has invested heavily for twenty-five years in building web crawling infrastructure. If you wanted to compete with Google on its turf today,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> and create a web crawler and search engine on par with Google&#8217;s, the level of capital expenditure would render it basically impossible: it would take billions of dollars over many years of engineering <em>just to get to par</em> with Google. Then you&#8217;d have to spend tens of billions of dollars trying to take market share away from Google, and <em>only then </em>would you be in a position where you can fight a price war on ads against Google, at the end of which neither of you will make any profits. It&#8217;d be economic insanity to try. 
So nobody does it, and Google&#8217;s search engine enjoys a <a href="https://en.wikipedia.org/wiki/Natural_monopoly">natural monopoly</a> and prints money. </p><p>The reason to tell that story is that there&#8217;s an analogous case for Waymo. Google could perfectly rationally deploy many billions of dollars into Waymo to obtain:</p><ol><li><p>Proprietary vehicles that are a superior form factor, both in terms of rider experience and unit economics (i.e. minimizing maintenance costs);</p></li><li><p>Massive scale, creating superior ride liquidity &#8212; i.e. when a customer requests a ride, they are able to obtain one faster than with any competing service;</p></li><li><p>Superior economies of scale in purchasing, outfitting, and maintaining vehicles;</p></li><li><p>Superior technology and data, creating a better and safer drive experience;</p></li><li><p>A superior safety record translating into regulatory capture.</p></li></ol><p>Google has plenty of money: <a href="https://companiesmarketcap.com/alphabet-google/cash-on-hand/">$108B in pure cash on hand</a>, and <a href="https://www.macrotrends.net/stocks/charts/GOOGL/alphabet/free-cash-flow">$69B in free cash flows</a> for 2023. Self-driving cars are among the few markets with TAMs big enough to move the needle for Google, so it would make sense to burn a few billion dollars a year on Waymo for the next decade, and to start capturing profits <em>only once</em> the barriers to entry are too tall for competitors and Waymo is in a position of natural monopoly.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a> </p><p>There are two important notes here:</p><ol><li><p>To believe there can be a natural monopoly, you must believe this is a winner-take-all or winner-take-most market. 
We&#8217;ve had a great live experiment with that over the past fifteen years: even though there is little differentiation among rideshare services, it has been very hard for new ones to compete. Ten years ago, I expected many vendors to emerge, but the reality has been that acquiring customers is so expensive and the margins are so thin that Uber has been able to take over the market, with small (shrinking) slices for Lyft and for local alternatives (primarily outside the US). By analogy, this suggests that the market for self-driving vehicles will have winner-take-most characteristics.</p></li><li><p>Google has already shown tremendous long-term thinking: they have been investing in Waymo for fifteen years now. Consider the level of conviction required to keep up this level of investment for so long, especially in the early years when deployment seemed ever-so-far away. Now that the goal is in sight, massive spending is more rational than ever.</p></li></ol><h2>What Happens to Uber?</h2><p>Uber might go down as a strange story in business history. They&#8217;ve burned $31.4B in operating losses since 2014 to win this market. For many years, Uber (Travis Kalanick) recognized that human drivers were a transitional step, and that autonomous vehicles would eventually gobble up the space. TK had the right instinct to focus on Uber ATG starting in 2015, when it was still possible to catch up. It&#8217;s not anymore. I expect that as soon as Waymo deploys a thousand cars in a big taxi market like LA, Vegas, or NYC, the public markets will put an expiration date on Uber.</p><p>Startups are a tough game, and the path from <em>disruptor</em> to <em>disrupted</em> can be surprisingly short. Uber was founded in 2009. If Waymo disrupts Uber by 2029, then it will have been a twenty-year journey of posting huge losses to seize a market, scoring a few years of comparatively small <a href="https://www.theverge.com/2024/2/8/24065999/uber-earnings-profitable-year-net-income">operating profits</a>, and that&#8217;s it. <strong>Over a twenty-year journey, you really do have to keep innovating</strong> &#8212; in fact, it&#8217;s so long that you may be forced to bet on, and pivot, the very definition of your business. This will have been the case for Uber. TK and team saw this; I&#8217;m not sure if their successors did.</p><h2>Commutes Will Be Back</h2><p>The once-dreaded long commute is about to come back in a big and pleasant way. I would have no issue at all sitting in a Waymo for 45 minutes each way every day. It&#8217;s just a nice time to myself that I can use to nap, work, or read.</p><p>This means that the suburbs and exurbs stand to benefit meaningfully. Working in a high-cost-of-living city and then commuting from a lower-cost-of-living suburb is a common strategy, but it comes at a certain lifestyle tax: you lose a lot of time on the commute, and you have to plan around availability of transport. Inexpensive, widely available Waymos fix this.</p><h2>Leapfrogging Public Transit</h2><p>This brings us to public transportation: so far in this piece, I have chiefly compared Waymo to Uber. But Waymo is most compelling not as an alternative to a taxi or Uber, but as an alternative to driving yourself &#8212; or to being on public transit.
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Mucb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a3335d8-eb2a-4068-9d41-b096b27156f4_1788x1193.jpeg"><img src="https://substackcdn.com/image/fetch/$s_!Mucb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7a3335d8-eb2a-4068-9d41-b096b27156f4_1788x1193.jpeg" width="1456" height="971" alt="Image" loading="lazy"></a></figure></div><p>Waymo becomes most interesting as an alternative to public transit. It favorably changes the economies of scale: more riders per vehicle imply better economics, and deals with local governments can stabilize Waymo financially with long-term contracts.</p><p>Waymo as public transit is particularly attractive, because virtually all cities in the US have wound up in a position where building public transportation infrastructure is well-nigh impossible.
In San Francisco, it cost <a href="https://sfstandard.com/2023/09/13/san-franciscos-346m-bus-lane-just-got-more-expensive/">$346M over 6 years</a> to install a new set of north-south bus lanes on a straight two-mile stretch of road.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> In NYC, a new subway line is costing <a href="https://www.bloomberg.com/news/articles/2023-02-23/in-nyc-subway-a-case-study-in-runaway-transit-construction-costs">$2.5B per mile</a>. Trying to &#8220;build&#8221; anything, even if it&#8217;s just sectioning off an existing part of a road with some paint, invites months or years of local political debate, <a href="https://www.governing.com/assessments/the-weaponizing-of-environmental-law">environmental litigation</a>, and runaway costs far beyond any reasonable imagination.</p><p>But we have tons of roads. And thankfully, self-driving cars do not require any new infrastructure. This sidesteps the quagmire of trying to build new public transit infrastructure, and allows us to provide great public transit in the form of driverless cars/small buses without all the administrative overhead.
It&#8217;s an easy winner:</p><ul><li><p>The fact that these cars can be called by app to any destination and routed efficiently by algorithm means that they offer huge <em>accessibility </em>and <em>environmental</em> advantages over public transit that always follows a fixed route and schedule.</p></li><li><p>Many people who currently drive themselves would probably be happy to carpool in a self-driving vehicle if it&#8217;s reliable and easy, which would improve congestion.</p></li><li><p>Unlike all other public transit, maintaining the infrastructure for self-driving vehicles (simple roads) is relatively easy and inexpensive.</p></li></ul><p>It is ironic that for so long, every lane of road for cars was seen as zero-sum competition for public transit, and now we might achieve broad, high-quality public transit precisely via all the road space that we&#8217;ve already made available.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> </p><p>In fact, this might bring American public transportation to a <strong>leapfrog moment</strong>. Many pundits have lamented that developing cities elsewhere have &#8220;leapfrogged&#8221; the US on public transportation &#8212; building subways and <a href="https://www.businessinsider.com/french-california-high-speed-rail-north-africa-biden-trump-2022-10">rail networks</a> that put ours to shame. Over a hundred years ago, we built first-generation public transit.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a> Over the last forty years, other countries built second-generation public transit. Now we have the opportunity as a nation to lead the world on third-generation public transit, and in that course develop products and expertise that can be exported. 
</p><p>More granularly, cities all over the US are about to have a fantastic opportunity to redirect budget from artificially expensive transit infrastructure projects toward driverless cars and small buses as next-generation public transit. A billion dollars doesn&#8217;t buy you a lot of subway stops these days, but it&#8217;ll probably buy you a terrific long-term, at-cost contract for the future of transportation.</p><h6>BEST-PRACTICE DISCLOSURES</h6><p>I don&#8217;t have any financial positions in Uber or Google other than indirect investments in their common stock via low-cost index funds.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>The best-fitting analogy that comes to mind is the iPod of the 2000s: I remember that the advertising pitch was that it was like an MP3 player that could store far more songs, but then it turned out to be a qualitatively totally different, and much more satisfying user experience.
I don&#8217;t think there were many people who ever went back from iPods to MP3 players, even as the latter became equivalent on storage.</p><p>It&#8217;s tempting to draw an analogy between regular cellphones and the iPhone, but that experiential gulf was much bigger than the gulf between Uber and Waymo.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I haven&#8217;t tried, but the ride is smooth enough that I could probably work on my laptop the whole way through.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Waymo will enjoy a deep competitive moat for years to come: any competitor will have to  rush development in order to catch up to Waymo over any reasonable timeframe. Of course, rushing development means safety issues, and safety issues mean adverse regulatory actions (i.e. no roll-out). Self-driving is one of those domains where you simply have to wait (i.e. meticulously establish a safety record) for a long time to get to market. Unlike other players, Waymo has simply done all the required waiting, and that puts them in a great position of strength.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>The napkin math here is:</p><ul><li><p>Uber reports completing <a href="https://therideshareguy.com/uber-statistics/">23 million rides a day</a> globally<em> </em>at ~72% market share</p></li><li><p>US/Canada revenues are $19.4B out of $31.8B globally. 
</p></li><li><p>Assuming revenues and ride count are directly proportional (this is false, but probably good enough for our purposes) and multiplying everything through, you get 19.48 million rides a day. </p></li><li><p>Subtract 11% because we&#8217;ve counted Canada as part of the US market and Canada has ~11% the population of the US.</p></li><li><p>Add the Taxi market, which <a href="https://www.statista.com/statistics/945037/taxis-total-ridership-us/">Statista</a> estimates at another 1.64 million rides a day.</p></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>There&#8217;s no single authoritative resource on this. It is the subject of lots of discussion on <a href="https://www.reddit.com/r/uber">/r/uber</a>, and the 50% figure is my estimate based on the (recent) self-reported anecdotes across those discussions on Reddit. Some drivers claim to receive as little as 30% of the fare, but I suspect that&#8217;s anomalous. When I ask drivers, I usually find they&#8217;re being paid ~50% of what I pay Uber.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Uber 10-Q, March 31 2024. <a href="https://d18rn0p25nwr6d.cloudfront.net/CIK-0001543151/b2b64409-f6f1-4c25-acc6-fcd5e9a74807.pdf">Link</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>This is a little over $3 gross profit per 20-minute ride, with three rides per hour. 
I think that&#8217;s a reasonable benchmark for what people might be willing to pay to not drive themselves (assuming a similar cost of driving themselves in terms of gas, depreciation, etc.), especially when you consider the cost of parking in many geographies.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>In actuality, it would take slightly longer if you factor in the risk-free rate as opportunity cost of that upfront investment.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>A big part of this would be the actual costs of the self-driving hardware coming down. For context, Waymo vehicles are currently based on the Jaguar I-PACE, which starts at an MSRP of $72,000, implying the self-driving hardware costs about $130,000. If Waymo managed to bring down the cost of the base car and the cost of the self-driving hardware, then the economics change significantly.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>This paragraph sketches a somewhat contrarian view &#8212; lots of people think Google&#8217;s moat is based on network effects/positive feedback loops between ads and search. While there is some benefit there, in my opinion the true moat is the enormous infrastructural barrier to entry. 
I may expand this argument into a full essay in the future; let me know if you&#8217;d find that interesting.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>I&#8217;m using the phrase &#8220;on its turf&#8221; specifically because one might argue that Perplexity is competing with Google &#8212; but not exactly on Google&#8217;s <em>search engine</em> turf. Rather, Perplexity is making competitive use of a new paradigm.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Earlier in this piece, I wrote: &#8220;Maximizing business value here is a <em>volume</em> game: the winning recipe is massively scaled deployment, with prices lower than any competition, and taking the thin margin.&#8221; That describes a natural monopoly.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>In fairness, a significant part of this cost is because the project is joined with replacing some sewer and water lines under the same road. But it&#8217;s still far too expensive!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>You might ask whether we will need more road space, but I doubt it. If there&#8217;s anything we have a lot of in the US, it&#8217;s roads. 
I actually think road capacity will significantly increase as street parking will almost entirely disappear &#8212; giving way to actual traffic &#8212; since:</p><ol><li><p>Driverless cars will be in motion almost all the time;</p></li><li><p>Regular cars on average spend 92% of their lives parked (often on the street), and people who use their cars so seldom will probably not want to own them anymore when they have Waymo or equivalent public transit at their disposal.</p></li></ol></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>The New York City subway was once the greatest in the world!</p></div></div>]]></content:encoded></item><item><title><![CDATA[#18: The End of Schematic Businesses?]]></title><description><![CDATA[Many software businesses perform a schematic task: they help a user take some large, unwieldy, ever-evolving set of data, and impose some kind of schema or taxonomy on it, thereby making it manageable and useful. In the past two decades, many such businesses have become extraordinarily successful &#8212; adding up to many hundreds of billions of dollars in valuation.
They come in many shapes and flavors, for example:]]></description><link>https://essays.johnloeber.com/p/18-the-end-of-schematic-businesses</link><guid isPermaLink="false">https://essays.johnloeber.com/p/18-the-end-of-schematic-businesses</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Tue, 04 Jun 2024 18:15:00 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9673b217-4330-4531-bcc4-02bf5555a938_3072x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Many software businesses perform a <em>schematic</em> task: they help a user take some large, unwieldy, ever-evolving set of data, and impose some kind of schema or taxonomy on it, thereby making it manageable and useful. In the past two decades, many such businesses have become extraordinarily successful &#8212; adding up to many hundreds of billions of dollars in valuation. They come in many shapes and flavors, for example:</p><ul><li><p>CRMs take unstructured data like emails and phone calls, and create a schematic interface to them: letting you group, filter, sort, and take actions on your contacts.</p></li><li><p>Business intelligence tools take structured data, like your application database or Google Analytics, and give you new, visual interfaces for this combined data.</p></li><li><p>Special-purpose OCR tools take raw documents and extract key data from them, exposing them by API, again turning unstructured data into structured data.</p></li></ul><p>This is a very broad category. It basically includes any kind of <em>data transformation </em>task, which is common across all kinds of management systems,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> document parsing tools, data visualization software, and so forth. 
It has been a successful industry because <em>data transformation</em> has (1) historically been tricky and labor-intensive, with many edge cases that you have to guard against, and (2) usually been integrated deep in the bowels of a business: once you install Salesforce, you&#8217;re not tearing it out again.</p><p>So, here&#8217;s a question that might have seemed crazy two years ago:</p><h1><strong>What happens when data transformation is no longer valuable?</strong></h1><p>Let me take a step back and provide two examples.</p><h4>Documents and APIs</h4><p>Historically, many businesses have interacted like this:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!o_lK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe117df11-5ba0-4240-99e5-4901d380c09f_1938x1364.png"><img src="https://substackcdn.com/image/fetch/$s_!o_lK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe117df11-5ba0-4240-99e5-4901d380c09f_1938x1364.png" width="1456" height="1025" alt=""></a></figure></div><p>And for a long time, the engineers&#8217; answer was: <em>these businesses need APIs</em>. Instead of having employees push documents back and forth, the systems should talk to each other directly. Many good companies have been built on this proposition.</p><p>But in a world in which OCR is really good, it may not matter much anymore. Suppose someone emails me a document that contains some useful data. I can pass the document to an application that extracts the text, has an LLM extract the relevant data, and prompts it to coerce that data into the schema my database has. It no longer practically matters whether the data comes as an e-mailed document or by API.
It winds up in my database either way: effortlessly, and <em>without the two companies having to spend months agreeing on an API schema and then building out some maintenance-needy interoperability layer</em>.</p><h4>Salesforce</h4><p>Suppose you have a small team, ten thousand clients, and a couple hundred thousand emails across all those clients. The conventional solution to keep track of everything is a CRM like Salesforce.</p><p>But a couple hundred thousand emails isn&#8217;t actually very much data. You can put all those emails into a database, compute the <a href="https://en.wikipedia.org/wiki/Word_embedding">embeddings</a>, and then run <a href="https://en.wikipedia.org/wiki/Prompt_engineering#Retrieval-augmented_generation">RAG</a> across that. None of this is hard, and LLMs then allow you to query your dataset:</p><ul><li><p>Where is Bryce on the deal with the Fisher Account?</p></li><li><p>How are we doing with Van Patten, do we need to follow up?</p></li><li><p>What did I talk about last with McDermott?</p></li><li><p>List all the accounts that I spoke to last month, but not this month</p></li></ul><p>That functionality is similar to what Salesforce provides. Adding basic automation that handles prompts such as &#8220;Send a follow-up email to any client that I haven&#8217;t spoken with in the last 6 months&#8221; is easy. Would I rather have this, or spend months building out and maintaining a costly Salesforce implementation? I lean toward the former.</p><h3><strong>The Attack</strong></h3><p>There are three components coming together:</p><ol><li><p>Large Language Models are <em>very good</em> at data transformation;</p></li><li><p>Continuously improving data store technologies and the falling cost of compute make it feasible to run arbitrary data transformations on-the-fly;</p></li><li><p><em>Business data</em> is growing, but more slowly: documents, emails, spreadsheets, etc.
just aren&#8217;t very big.</p></li></ol><p>Not too far out in the future, businesses will gain the option to choose between (1) schematic software that lets them organize their data into a very precise traditional interface, and (2) LLM applications that let them organize and interface with their data on-demand as necessary. In the near future, the schematic software might have a precision advantage over the LLM application, but as time goes on, I&#8217;d expect the LLM application to become better and better at making precisely the right transformation based on the user&#8217;s demand.</p><h3><strong>The LLM Advantage</strong></h3><p>The LLM approach has one big advantage: it&#8217;s much more flexible, so it&#8217;s not annoying to set up and maintain.</p><p>By way of example: most companies are in <em>Dashboard Hell</em>. They have a bunch of internal dashboards that provide a gazillion different views into the business data. Most employees are familiar with maybe two of these dashboards, and are otherwise totally lost when trying to get to a data-driven answer. They have to go ask someone on the data team the question, and then the data person will file a ticket and eventually return some kind of view that the employee would&#8217;ve never found. Not only do such dashboard tools (Looker, Tableau, Sigma, etc.) run you <em>at minimum</em> $10,000 a year, but because they also require people to continuously maintain them<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> and help regular employees get answers from them, the fully-loaded cost easily and quickly climbs into six figures a year and beyond. </p><p>But the competition is coming! There are now many, many startups trying to attack exactly this problem with LLMs: they enable companies to connect their database and other data sources, and then let the user ask natural-language questions. 
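</p><p><em>A minimal sketch of that flow, assuming SQLite and a stand-in for the model call (the helper names here are hypothetical, not any particular vendor&#8217;s API):</em></p>

```python
import sqlite3

def schema_summary(conn):
    """Collect the CREATE TABLE statements so the model can see the live schema."""
    rows = conn.execute("SELECT sql FROM sqlite_master WHERE type = 'table'").fetchall()
    return "\n".join(r[0] for r in rows)

def answer(question, conn, ask_llm):
    """Ask the model for one SELECT statement over the schema, then run it."""
    prompt = ("Given this SQLite schema:\n" + schema_summary(conn)
              + "\n\nWrite one SQL SELECT statement answering: " + question)
    return conn.execute(ask_llm(prompt)).fetchall()

# Offline demo: a canned "model" stands in for the real LLM call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, last_contacted TEXT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("Fisher", "2024-01-10"), ("Van Patten", "2023-06-01")])
fake_llm = lambda _prompt: "SELECT name FROM accounts WHERE last_contacted < '2024-01-01'"
print(answer("Which accounts have gone quiet?", conn, fake_llm))  # [('Van Patten',)]
```

<p>Nothing in that sketch is exotic; the hard part is prompt and schema quality, not plumbing.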
Behind the scenes, an LLM-enabled application makes sense of the schemas in real time, designs the right queries, and returns the output. That&#8217;s accessible and useful in a way that current business intelligence tools just aren&#8217;t. </p><h3><strong>My Perspective</strong></h3><p>Many years ago, I once worked for a company that had to handle and reconcile user data from a great many sources, some of which would conflict with one another. My new-engineer instinct was to carefully design precise schemas for everything that we could possibly expect, and then define carefully-thought-out rules for when to prioritize persisting which data.</p><p>More senior engineers on the team suggested a different path: they thought it was too hard to predict all the weird data schemas we might encounter, and so they designed one fairly simple, permissive schema that could easily accommodate all such data, and then they put all their effort into a fast <em>harmonization algorithm</em> that would run on the fly and pick out the best pieces of data in response to a query.</p><p>I often think about <em>harmonization.</em> It was one of those classic don&#8217;t-let-perfect-be-the-enemy-of-good techniques that make for practical engineering: instead of building a brittle, special-cased system, build something that&#8217;s easy to maintain, works well in the vast majority of cases, and then you can tune performance up over time. You&#8217;re not going to have perfect accuracy up-front, but for most applications, you don&#8217;t really need perfect accuracy and the trade-off is worth it.</p><p>I suspect that there is a large number of software businesses that constitute <em>useful but brittle, special-cased systems </em>for schematizing data<em>,</em> and that they will soon give way to LLM applications simply sitting right on top of the data, harmonizing on the fly. 
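</p><p><em>To make the harmonization idea concrete, here is a toy version (the field names and source rankings are invented for illustration): every source&#8217;s output goes into one permissive record shape, and a small scoring rule picks the best value per field at query time.</em></p>

```python
from datetime import date

# One permissive schema: each record is a field/value pair plus provenance.
records = [
    {"field": "email", "value": "jdoe@old-corp.com",
     "source": "crm_export", "seen": date(2021, 3, 2)},
    {"field": "email", "value": "jdoe@new-corp.com",
     "source": "inbound_mail", "seen": date(2024, 5, 9)},
    {"field": "phone", "value": "555-0101",
     "source": "crm_export", "seen": date(2021, 3, 2)},
]

# Hypothetical trust ranking; tuned over time instead of special-casing schemas.
SOURCE_RANK = {"inbound_mail": 2, "crm_export": 1}

def harmonize(records):
    """Pick the best value per field: most-trusted source wins, recency breaks ties."""
    best = {}
    for r in records:
        score = (SOURCE_RANK.get(r["source"], 0), r["seen"])
        if r["field"] not in best or score > best[r["field"]][0]:
            best[r["field"]] = (score, r["value"])
    return {field: value for field, (_, value) in best.items()}

print(harmonize(records))  # {'email': 'jdoe@new-corp.com', 'phone': '555-0101'}
```

<p>Accuracy is imperfect by construction, but the scoring rule is one small function you can tune, rather than a lattice of per-source schemas to maintain.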
</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Applicant tracking systems, customer relationship management systems, agency management systems, document management systems, etc. 
Even project management software like Asana or Linear is about data transformation in the sense that it gives me and my team a place to clearly organize, under a well-defined schema, project management items that would otherwise be scrambled across many emails, Discord messages, and Google documents.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For example, due to schema changes in the underlying database.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[#17: Seriously, what is Intelligence?]]></title><description><![CDATA[We are in the earliest innings of an intelligence revolution: progress in the field of Artificial Intelligence is now rapid, and the innovations are becoming accessible to the public just as quickly. Millions of people are now using tools like ChatGPT, and may be thinking about what it would mean to have]]></description><link>https://essays.johnloeber.com/p/17-seriously-what-is-intelligence</link><guid isPermaLink="false">https://essays.johnloeber.com/p/17-seriously-what-is-intelligence</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sun, 05 May 2024 03:39:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/01b19e67-3041-4d4b-9e32-e3e21e3c4ffb_1536x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We are in the earliest innings of an <em>intelligence revolution</em>: progress in the field of Artificial Intelligence is now rapid, and the innovations are becoming accessible to the public just as quickly. Millions of people are now using tools like ChatGPT, and may be thinking about what it would mean to have <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a> in their lifetimes. 
This includes fearful policymakers and lobbyists who want to control and restrict AI. They make arguments about the dangers of rogue intelligence &#8212; AI escaping our control.</p><p>But these arguments tend to leave out <em>what exactly they mean when they say &#8220;intelligence&#8221;. </em>This is no coincidence, because these arguments are often <em>not actually about intelligence</em> when you examine them closely. They are about related, but distinct concepts (like consciousness or emotion) that are often conflated for anthropological reasons. In that way, because intelligence is such a nebulous concept, it is easy for people to make misleading arguments about it. My objective in this blog post is to:</p><ol><li><p>Explain why intelligence is often conflated with other concepts;</p></li><li><p>Try to disentangle intelligence from these concepts;</p></li><li><p>Discuss what the current generations of LLMs are showing us about intelligence;</p></li><li><p>Predict what the next generations of LLMs may look like, and what we may learn about human theory of mind in the process.</p></li></ol><h2>Anthropic Bias</h2><p>Out of all the life on planet Earth, we humans are spectacularly unique and capable. 
For example, we possess the following qualities:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><ul><li><p><strong>Intelligence</strong>: the ability to perceive, infer, acquire, retain, and apply information</p></li><li><p><strong>Consciousness</strong>: the awareness of our own existence</p></li><li><p><strong>Emotion</strong>: the capacity to alter mental state due to physiological change</p></li><li><p><strong>Reason</strong>: the ability to logically draw true statements from other true statements</p></li></ul><p>This is in contrast to all other animals, which either do not possess these qualities at all, or possess them to vastly smaller extents.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> This leads to what some call <strong>Anthropic Bias</strong>:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> for example, if we view ourselves as the only truly intelligent creatures, and as the only truly conscious creatures, then it is tempting to think that intelligence and consciousness are tied together; perhaps even that they are the same thing. </p><p>This is the <strong>conflation</strong> that I mentioned earlier: people talk about <em>intelligence</em> when they really mean <em>reason</em> or <em>consciousness</em>. From a normal human perspective, because these qualities are so intertwined for us, they get lumped together as one. But they shouldn&#8217;t be, and contemporary progress in LLMs makes this clear.</p><h2>What We&#8217;re Learning from LLMs</h2><p>If you&#8217;ve played with ChatGPT, your immediate response was probably something like: <em>wow, this is intelligent</em>. There&#8217;s no doubt it passes a 1980s-style <a href="https://en.wikipedia.org/wiki/Turing_test">Turing Test</a>. 
You can ask it about the significance of the Peace of Westphalia or what a Monad is, and the answers will be better than what I can tell you. </p><p>But what&#8217;s also apparent from playing around with ChatGPT is that it doesn&#8217;t seem to meet any standard of emotion or consciousness. In fact, I&#8217;d argue it&#8217;s not even capable of reasoning: <a href="https://loeber.substack.com/p/16-notes-on-arithmetic-in-gpt-4">it fails to compute simple arithmetic</a>. Furthermore, the core architecture of ChatGPT &#8212; it&#8217;s a <a href="https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)">transformer</a>-based LLM &#8212; is already at huge scale: it&#8217;s trained on a dataset that most likely constitutes a significant percentage of all human text in existence, so it&#8217;s not clear whether scaling it up another 10x or 100x will cause its abilities to converge toward &#8220;true&#8221; reasoning. Reading the tea leaves of the <a href="https://arxiv.org/pdf/2402.03175">contemporary literature</a> reinforces my impression: precise reasoning may require some different or additional architecture.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></p><p>In the previous section, I listed &#8220;intelligence&#8221; and &#8220;reason&#8221; as separate bullet points, which may have looked unusual. This is the reason I did so: one of the things that&#8217;s been enormously surprising to me in my work with LLMs is that conventional information-learning-and-recall notions of &#8220;intelligence&#8221; may be mostly distinct from the ability to reason.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> Fascinating!</p><h2>So What is an LLM?</h2><p>Well, in the context of artificial intelligence, it doesn&#8217;t really resemble humans at all. It&#8217;s a totally new thing. 
No consciousness, no emotion, arguably no reasoning, just pure informational intelligence in the form of a gigantic interpolative datastore. </p><p>In theory of mind, there is the popular thought experiment of a <a href="https://en.wikipedia.org/wiki/Philosophical_zombie">P-Zombie</a>, short for a &#8220;philosophical zombie&#8221;: a creature that is externally identical to a human, but does not have an internally conscious experience. For example, if you were to poke a P-Zombie with a stick, it would say &#8220;ouch&#8221; and &#8220;get that away from me&#8221; and step backwards, but it would not internally feel pain or think &#8220;I will tell my sibling about the maniac with the stick&#8221; as a person might. </p><p>Similarly, if you were to prompt ChatGPT with the phrase &#8220;your beloved family pet has died. How do you feel?&#8221; and then &#8220;now tell your teenage daughter about this,&#8221; you will get perfect human outputs: a somber reflection on mortality, an expression of grief, and a delicately worded parental message. It reads like emotion, but it isn&#8217;t. ChatGPT does not have a dog or a family or a daughter or continuously running internal dialogue,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> all of this is just P-Zombie simulacrum.</p><h2>Dismantling AI Safety Arguments</h2><p>Now that we&#8217;ve built up some precision in discussing intelligence, disentangled it from other notions like consciousness, and pointed out some of the singular weirdness and limitations in LLM behavior, we&#8217;re better-equipped to cut through the noise of some AI Safety arguments. Specifically, the broad class of argument that I&#8217;d like to address goes something like this:</p><blockquote><p>Encounters between species or civilizations with a great imbalance of power tend to end in the destruction or conquest of the weaker one. 
If we invent a digital super-intelligence that is hostile to us, we would be outmatched and face existential risk.</p></blockquote><p>There are a lot of subtly buried assumptions in there that make this argument misleading:</p><ul><li><p>The analogy to inter-species or inter-civilizational conflict is clearly inappropriate. While it might be tempting to look at AI through the interpretative lens of things that we know well &#8212; ourselves &#8212; it doesn&#8217;t fit. We have shown that LLMs are nothing like any existing thing.</p></li><li><p>Hostility or conflict of any kind follows from some impulse for self-preservation, acted on in an environment of scarce resources and zero-sum competition. This is natural to virtually all animals on planet Earth (including ourselves): Darwinian evolution results in creatures that protect themselves and reproduce. This is such a powerful historical filter that this trait is deeply ingrained in us, but <em>a priori </em>there is no reason why any kind of artificial intelligence &#8212; not subject to the same evolutionary filter at all &#8212; would share this characteristic.</p></li><li><p>What does &#8220;super-intelligence&#8221; mean? Would that be a thing that is extremely good at learning, retrieving, providing, and applying information? Doesn&#8217;t GPT-4 already meet that definition? The conflation is that &#8220;intelligence&#8221; here implies the capability for reasoning too &#8212; but we still seem to be very far away from that. </p></li></ul><h2>Being Precise about Intelligence</h2><p>This is not the first time in history that a widely used concept, when closely examined, has turned out to be opaque and instead given way to an assembly of many other concepts: the ancient Greeks called everything <em>philosophy</em>; this has now been broken down into hundreds of scientific disciplines. 
In the 20th century, psychologists diagnosed everything as <em>schizophrenia</em>; today that&#8217;s an exceedingly rare diagnosis, but the DSM-5 lists over 300 more precise conditions. </p><p>The same may occur for intelligence: as the field of artificial intelligence keeps advancing and we work practically with more forms of AI, we will be pressed to split many of these nebulous, broad concepts up into precise characteristics. This may cause us to re-shape our own theory of mind as well. It seems likely to me that in a few years, the model of human cognition will not be as a monolith, but rather as the outcome of many distinct systems interacting in complex and perhaps surprising ways.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a></p><p>Continued research in LLMs may take us to an unprecedented position, where we will have invented these things that are hyper-intelligent in a very specific way &#8212; extraordinarily powerful at statistically compressing knowledge and returning approximations to it &#8212; but otherwise totally inert. This breaks how most people currently think about intelligence, and that&#8217;s why I wrote this blog post &#8212; it&#8217;s important to start thinking differently about these constructs if you don&#8217;t want to make premature or dumb policy decisions.</p><h2>The Intelligence Revolution</h2><p>I wrote at the start that we are in the very earliest stages of an <em>intelligence revolution</em>. What did I mean by this? </p><p>Consider the industrial revolution. Some historians define it by the inventions of certain machines and processes &#8212; but others see it as defined by a break from history: the amount of <strong>energy</strong> harnessed and consumed by humanity began to increase super-linearly per capita. 
The availability and energy density of fossil fuels was a tremendous unlock, probably a necessary condition for bringing our civilization to where it is today.</p><p>Similarly, I have little doubt that future historians looking back on our time will again see it as defined by a break from history: the amount of <strong>intelligence</strong> available to humanity began to increase super-linearly per capita in the early 2020s, and will have been a tremendous unlock. But it&#8217;s yet unclear <em>what exactly</em> that intelligence really comprises, or what the retrospective measurement of it will be.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This is not an exhaustive list of qualities that make us distinct or are relevant to consider when discussing intelligence. 
For example, notions like the capacity for the subjective-objective distinction or self-motivation are worth taking into account; they are just not as core as the main four that I mentioned above. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For example, great apes are commonly understood to exhibit some intelligent traits &#8212; just much less so than humans. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>I know that there is a book related to this topic called <em>Anthropic Bias</em> by Nick Bostrom. I have not read it. I am using the term <em>Anthropic Bias</em> strictly because it is useful and precise, not because I am trying to provide any kind of literature reference. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>The connection between Vishal Misra&#8217;s paper and symbolic reasoning may not be immediately apparent. Here&#8217;s how I think about it: Misra&#8217;s paper shows that <a href="https://x.com/vishalmisra/status/1786144925311967322">LLMs are not capable of recursive self-improvement</a>. But symbolic reasoning abilities would enable self-improvement: logical reasoning enables the creation of new true statements based on existing true statements, and the refinement of internal knowledge &#8212; i.e. using reasoning to evaluate truthfulness of internal statements and the discarding of ones that are proven false. 
But if LLMs cannot recursively self-improve, then <a href="https://en.wikipedia.org/wiki/Contraposition">by contraposition</a> it must be true that LLMs are not capable of logical reasoning either, which would otherwise enable recursive self-improvement.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>To the core point of this article, note of course that this observation is relying on a particular definition of intelligence, which is up for debate. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>This is a total aside: I&#8217;ve been thinking about implementing LLMs with continuously running internal monologue and seeing how that changes their behavior. One of the obvious big differences between LLMs and brains is that brains are always on and subject to some quasi-random electrical perturbations, whereas LLMs lie dormant in-between function calls. If you&#8217;d be interested in putting together some after-work experiments in LLM behavior with me, please email me at contact@johnloeber.com. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Scientific findings about the impact of our microbiotic flora and <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6469458/">gut-brain axis</a> over the last decade seem like an early indicator of the type of systemic discoveries yet to come.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Perhaps implicitly, this observation is why there has been a big resurgence of interest in the <a href="https://en.wikipedia.org/wiki/The_Origin_of_Consciousness_in_the_Breakdown_of_the_Bicameral_Mind">work of Julian Jaynes</a> over the past year. I recommend reading Scott Alexander&#8217;s <a href="https://slatestarcodex.com/2020/06/01/book-review-origin-of-consciousness-in-the-breakdown-of-the-bicameral-mind/">commentary on it</a>.  </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Unlike energy, which is easily measured in Joules, it is not clear today what the best measure of this computational intelligence is. I think this is funny, because people in my field love to say they&#8217;re building &#8220;toward intelligence too cheap to meter&#8221; &#8212; what unit would you meter, anyway? Gigahertz? Floating-point operations? 
If it is actually too cheap to meter, the question is of course moot, but it won&#8217;t be for the foreseeable future.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#16: Notes on Arithmetic in GPT-4]]></title><description><![CDATA[A few weeks ago, I had a list of dollar amounts that I needed to sum up.]]></description><link>https://essays.johnloeber.com/p/16-notes-on-arithmetic-in-gpt-4</link><guid isPermaLink="false">https://essays.johnloeber.com/p/16-notes-on-arithmetic-in-gpt-4</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Tue, 20 Feb 2024 23:36:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4ab06030-2768-4a58-94e0-170719511a96_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few weeks ago, I had a list of dollar amounts that I needed to sum up. I thought: &#8220;GPT is good at converting formats,&#8221;  and copy-pasted them into ChatGPT. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5S65!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5S65!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png 424w, https://substackcdn.com/image/fetch/$s_!5S65!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png 848w, 
https://substackcdn.com/image/fetch/$s_!5S65!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png 1272w, https://substackcdn.com/image/fetch/$s_!5S65!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5S65!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png" width="1456" height="401" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:401,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:59726,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5S65!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png 424w, https://substackcdn.com/image/fetch/$s_!5S65!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png 848w, 
https://substackcdn.com/image/fetch/$s_!5S65!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png 1272w, https://substackcdn.com/image/fetch/$s_!5S65!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd7e9c5d2-e59d-4a1a-969a-5f59036db9ec_1534x422.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The result looked plausible. But I had a moment of doubt: why should GPT be good at addition? 
So I double-checked the sum myself. It turned out that GPT was wrong; the right number was $660.44. <strong>Ever-so-slightly off.</strong></p><p>This struck me as a very strange result, for two contradictory reasons:</p><ol><li><p>It&#8217;s not surprising that GPT is <em>wrong</em>. There&#8217;s nothing really about a language model that implies learning to sequentially apply the strict rules of arithmetic. While it&#8217;s possible to argue that GPT can reason if you give it some memory and many iterations over which its language inference turns into (fuzzy) symbolic inference,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> that&#8217;s not happening in a single ChatGPT query.</p></li><li><p>But then why is the result so close? <strong>It&#8217;s only wrong by 0.3%!</strong> Clearly there&#8217;s some kind of fuzzy symbolic inference getting made here that approximates addition pretty well.</p></li></ol><p>This piqued my interest. My hypothesis was that the GPT-4 training set was so vast that it simply included many strings of arithmetic, which could then be recited verbatim for correct results, or used as a base for some token-level inference. Especially if there&#8217;s <a href="https://en.wikipedia.org/wiki/Memoization">memoization</a> at play, this would explain good arithmetic performance even without the LLM actually conducting symbolic reasoning. </p><p>However, that&#8217;s a fragile process. If the above were true, then GPT should become less accurate as the arithmetic expressions became longer. I was curious what the relationship would look like, so I took an evening to run some experiments. </p><h2>Experiment: Plain Arithmetic</h2><p><em>You can follow along by looking through the code and data in my <a href="https://github.com/Datamine/OpenAI-Arithmetic">open-source repo</a>.</em></p><h4>Addition</h4><p>I created sequences of random integers between 1 and 100. 
I varied these sequences in length from 2 numbers all the way through 24 numbers, and joined them up with addition symbols.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> I used a prompt in which I asked GPT-4 to solve the arithmetic problem and to return strictly a number. I then evaluated the GPT-generated sum against the true sum, and measured what percentage matched exactly. For every sequence length, I took 20 samples, so the chart below represents 460 datapoints.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m-vn!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m-vn!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png 424w, https://substackcdn.com/image/fetch/$s_!m-vn!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png 848w, https://substackcdn.com/image/fetch/$s_!m-vn!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png 1272w, https://substackcdn.com/image/fetch/$s_!m-vn!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!m-vn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png" width="1274" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1274,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:53165,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!m-vn!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png 424w, https://substackcdn.com/image/fetch/$s_!m-vn!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png 848w, https://substackcdn.com/image/fetch/$s_!m-vn!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png 1272w, https://substackcdn.com/image/fetch/$s_!m-vn!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5d428-13fe-4396-ba1a-784e09e8fd9f_1274x768.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h4>Subtraction</h4><p>Same methodology. 
I expected addition to perform better than subtraction because addition is a much more common operation, but the results were quite similar.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bky9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bky9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png 424w, https://substackcdn.com/image/fetch/$s_!bky9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png 848w, https://substackcdn.com/image/fetch/$s_!bky9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png 1272w, https://substackcdn.com/image/fetch/$s_!bky9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bky9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png" width="1270" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1270,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:61356,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bky9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png 424w, https://substackcdn.com/image/fetch/$s_!bky9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png 848w, https://substackcdn.com/image/fetch/$s_!bky9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png 1272w, https://substackcdn.com/image/fetch/$s_!bky9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F251a03c4-460b-42eb-8ced-6b81ddc52c60_1270x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h4>Multiplication</h4><p>Same methodology. Again, my guess was that accuracy would drop off faster than for addition or subtraction because there&#8217;s likely fewer such text samples online, and multiplication blows up the number of tokens: if you&#8217;re multiplying 20 numbers between 1 and 100, the result is going to be a long number &#8212; lots of digits &#8212; which increases the room for GPT to make mistakes. 
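</p><p><em>That token blow-up is easy to quantify. A back-of-the-envelope comparison of the worst cases (these are bounds, not measurements from the experiment):</em></p>

```python
def digits(n: int) -> int:
    """Number of decimal digits the model must emit to state n exactly."""
    return len(str(n))

# Worst case over 20 operands drawn from 1..100:
worst_product = 100 ** 20  # every factor at the maximum
worst_sum = 100 * 20       # every addend at the maximum

print(digits(worst_product))  # 41 digits, every one of which must be right
print(digits(worst_sum))      # 4 digits
```

<p><em>Under exact-match scoring, a single wrong digit out of those 41 counts as a total failure, so multiplication gives the model an order of magnitude more ways to miss.</em></p><p>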
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!K2Xl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!K2Xl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png 424w, https://substackcdn.com/image/fetch/$s_!K2Xl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png 848w, https://substackcdn.com/image/fetch/$s_!K2Xl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png 1272w, https://substackcdn.com/image/fetch/$s_!K2Xl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!K2Xl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png" width="1266" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1266,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:65222,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!K2Xl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png 424w, https://substackcdn.com/image/fetch/$s_!K2Xl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png 848w, https://substackcdn.com/image/fetch/$s_!K2Xl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png 1272w, https://substackcdn.com/image/fetch/$s_!K2Xl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10611cdf-90e3-40ec-bfe9-5f70794c592d_1266x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>As predicted, performance was much worse: GPT-4 had 95% accuracy for multiplying two numbers under 100, 30% accuracy for multiplying three, 5% accuracy for multiplying 4, and zero thereafter.</p><h4>Large Addition</h4><p>I wanted to further investigate the idea that GPT-4&#8217;s addition is good because there are many solved examples in its training data. I assumed that numbers under 100 are much better represented than large numbers, and was curious if performance would degrade if I used larger numbers instead. For this set, instead of generating random integers between 1 and 100, I generated random integers between 812,300 and 812,400. I figured these numbers were big enough that it&#8217;s unlikely for there to be lots of solved examples online.</p><p>The results bear this out: accuracy drops off much faster than for regular addition, but it still stands out as remarkable to me that it&#8217;s scoring 100% accuracy on 2- and 3-number sequences. 
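</p><p><em>The variant is a one-line change to how the addends are generated; roughly (illustrative code, not necessarily what the repo does):</em></p>

```python
import random

def make_large_addition(n_terms: int, seed=None):
    """Addends in 812,300..812,400: large enough that their pre-computed
    sums are unlikely to appear verbatim in web-scale training text."""
    rng = random.Random(seed)
    terms = [rng.randint(812_300, 812_400) for _ in range(n_terms)]
    return " + ".join(str(t) for t in terms), sum(terms)

expr, truth = make_large_addition(3, seed=1)
```

<p>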
(We&#8217;ll come to why this is important in a later section.)</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!RIJO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!RIJO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png 424w, https://substackcdn.com/image/fetch/$s_!RIJO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png 848w, https://substackcdn.com/image/fetch/$s_!RIJO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png 1272w, https://substackcdn.com/image/fetch/$s_!RIJO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!RIJO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png" width="1266" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1266,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:75301,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!RIJO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png 424w, https://substackcdn.com/image/fetch/$s_!RIJO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png 848w, https://substackcdn.com/image/fetch/$s_!RIJO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png 1272w, https://substackcdn.com/image/fetch/$s_!RIJO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F40e657bb-ec9b-4a65-bcce-e5e559fa090c_1266x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h4>Addition by Sum</h4><p>We observed that changing the operator from addition to multiplication, for example,  is accompanied by degradation in performance. But what if you just change the syntax? Does &#8220;sum of 2, 3, 4&#8221; perform differently from &#8220;2 + 3 + 4&#8221;? Again, my hunch was that &#8220;sum&#8221; would perform worse because it&#8217;s a less commonly used syntax. 
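</p><p><em>Concretely, these are the two prompt forms being compared (the formatting helpers are mine, for illustration):</em></p>

```python
def plus_syntax(terms):
    """Operator form, e.g. '2 + 3 + 4': the syntax used in the earlier runs."""
    return " + ".join(str(t) for t in terms)

def sum_syntax(terms):
    """Natural-language form, e.g. 'sum of 2, 3, 4': same arithmetic, different surface."""
    return "sum of " + ", ".join(str(t) for t in terms)

print(plus_syntax([2, 3, 4]))  # 2 + 3 + 4
print(sum_syntax([2, 3, 4]))   # sum of 2, 3, 4
```

<p>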
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!xG6A!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!xG6A!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 424w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 848w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 1272w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!xG6A!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png" width="1262" height="768" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1262,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:82803,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!xG6A!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 424w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 848w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 1272w, https://substackcdn.com/image/fetch/$s_!xG6A!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff4fa83e0-4b95-465a-af45-c7429f120d83_1262x768.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It turns out that the sum syntax performs much better than regular addition. If you&#8217;re surprised, I was too!</p><p>The implication is that &#8220;+&#8221; and &#8220;sum&#8221; are not represented the same way internally by GPT-4.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> If the results were the same, or close, then there would&#8217;ve been a conclusion to draw about GPT internally transforming different syntaxes of equivalent inputs into the same format &#8212; but that is not so. This is noteworthy to me because syntax/format transformation<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> is a task that GPT is usually strong at. 
But I suppose that transformation for output is not the same as transformation for internal processing.</p><h2>How Far Off?</h2><p>We know that GPT becomes likelier to make an arithmetic error the longer the expression is. But how severe are these errors? The example from the very start, which motivated this entire investigation, was off by only 0.3%.</p><p>I took the Large Addition dataset (adding numbers between 812,300 and 812,400), and took a look at how far off the GPT-generated result was on average:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!lR26!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!lR26!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png 424w, https://substackcdn.com/image/fetch/$s_!lR26!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png 848w, https://substackcdn.com/image/fetch/$s_!lR26!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png 1272w, https://substackcdn.com/image/fetch/$s_!lR26!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!lR26!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png" width="1266" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/dde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1266,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:66638,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!lR26!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png 424w, https://substackcdn.com/image/fetch/$s_!lR26!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png 848w, https://substackcdn.com/image/fetch/$s_!lR26!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png 1272w, https://substackcdn.com/image/fetch/$s_!lR26!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdde035fd-7c1f-4e8d-b719-069057b9e7d8_1266x768.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>These are pretty close! But looking at the average % deltas obscures some of the detail. 
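</p><p><em>For concreteness, the &#8220;% delta&#8221; here is the signed percentage error of GPT&#8217;s answer against the true sum; per sample it is just (helper name is mine):</em></p>

```python
def pct_delta(model_value: float, true_value: float) -> float:
    """Signed percentage error: 0.0 is exactly right, negative means too low."""
    return 100.0 * (model_value - true_value) / true_value

# The chart averages this quantity over the samples at each sequence length.
print(pct_delta(110.0, 100.0))  # 10.0
print(pct_delta(90.0, 100.0))   # -10.0
```

<p>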
Below are the actual results for the sequence of length 24: </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MeK7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MeK7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png 424w, https://substackcdn.com/image/fetch/$s_!MeK7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png 848w, https://substackcdn.com/image/fetch/$s_!MeK7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png 1272w, https://substackcdn.com/image/fetch/$s_!MeK7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MeK7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png" width="902" height="718" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:718,&quot;width&quot;:902,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:110310,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MeK7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png 424w, https://substackcdn.com/image/fetch/$s_!MeK7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png 848w, https://substackcdn.com/image/fetch/$s_!MeK7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png 1272w, https://substackcdn.com/image/fetch/$s_!MeK7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F29d108dc-4506-4b2f-bb3c-bb19415e6644_902x718.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>What&#8217;s interesting is that these results are very consistent: the delta is either around 0.20% or around -16.67%. This suggests that there are two different ways GPT handles these prompts. On the 0.2%-delta path, GPT is clearly hallucinating, yet it closely approximates the results of actual<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> addition.</p><h2>Symbolic Reasoning for Addition?</h2><p>Earlier I mentioned that even in the Large Addition case, GPT maintained 100% accuracy for 2- and 3-integer sums. But maybe those numbers were too small? 
I tested with even larger numbers.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pBU7!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pBU7!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png 424w, https://substackcdn.com/image/fetch/$s_!pBU7!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png 848w, https://substackcdn.com/image/fetch/$s_!pBU7!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png 1272w, https://substackcdn.com/image/fetch/$s_!pBU7!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pBU7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png" width="822" height="716" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:716,&quot;width&quot;:822,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:104894,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!pBU7!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png 424w, https://substackcdn.com/image/fetch/$s_!pBU7!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png 848w, https://substackcdn.com/image/fetch/$s_!pBU7!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png 1272w, https://substackcdn.com/image/fetch/$s_!pBU7!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c99c5d9-4faf-4e72-ad15-f26912f5817e_822x716.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Most of these are right. (70% correct.) The ones that are wrong are wrong in interesting ways &#8212; row 2 is just off by four, while rows 7, 9, 10, and 11 all have the thousands digit too low by one. Row 17 just has one extraneous digit. This feels similar to the &#8220;How Far Off&#8221; section above: maybe there are several internal paths by which GPT may handle these computations &#8212; some of them work, and others don&#8217;t. </p><p><strong>Can I force GPT into the &#8220;right&#8221; path?</strong> My first hunch was that the &#8220;wrong&#8221; paths were probably downstream of some of the internal randomness of the LLM. There&#8217;s a way to control this: the <a href="https://platform.openai.com/docs/api-reference/chat/create#chat-create-temperature">temperature parameter</a>. </p><p>So I set the temperature to zero, and ran a 400-sample experiment for the same number range. <strong>The results improved from 70% to 93.5%.</strong> The errors were the same types as above, mostly one digit too low in the thousands place. </p><p>Could I improve it further? I wrote a &#8220;stronger&#8221; prompt admonishing the LLM to use its best technique and fiddle with the results, and again ran a 400-sample experiment. <strong>The results improved further to 97.5%</strong>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>It appears that LLMs contain some pathways for achieving correct addition, and we are able to (approximately) unlock them by querying the LLM in particular ways. If such correct addition is purely generated by the LLM (and not any additional logic like built-in arithmetic tables), then this suggests that the LLM&#8217;s language inference <strong>is somehow conducting actual symbolic reasoning at small scale</strong>. Not reliably, but the fact that it happens at all is remarkable in and of itself.</p><p>This is a sign of what&#8217;s to come. The big limitation of LLMs at present is their inability to precisely and symbolically <em>reason</em>. But if they&#8217;re able to do <em>any</em> symbolic reasoning at all, then inductively, they should be able to do <em>more </em>symbolic reasoning. This suggests to me that the transformer-LLM architecture may actually scale to true (and correct) symbolic reasoning, which I regard as pretty much the Holy Grail of this branch of machine learning.</p><h2>A Dumb Turing Machine</h2><p>We observed that LLMs appear to have some limited capability for symbolic reasoning, particularly for adding two numbers. We saw performance degrade as we tried to add more numbers, but there&#8217;s a workaround. </p><p>I mentioned at the start that <em>you may argue that GPT can reason if you give it some memory and many iterations over which its language inference turns into (fuzzy) symbolic inference</em>. 
GPT will correctly compute long addition sequences, just like the ones we saw earlier, if you prompt it to proceed two numbers at a time.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!N1Rb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!N1Rb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png 424w, https://substackcdn.com/image/fetch/$s_!N1Rb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png 848w, https://substackcdn.com/image/fetch/$s_!N1Rb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png 1272w, https://substackcdn.com/image/fetch/$s_!N1Rb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!N1Rb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png" width="1390" height="598" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/bc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:598,&quot;width&quot;:1390,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:113671,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!N1Rb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png 424w, https://substackcdn.com/image/fetch/$s_!N1Rb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png 848w, https://substackcdn.com/image/fetch/$s_!N1Rb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png 1272w, https://substackcdn.com/image/fetch/$s_!N1Rb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fbc66faef-cd68-4b3e-ab2e-c66e30059211_1390x598.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>It worked. You can check out the <a href="https://chat.openai.com/share/adecaeb8-c349-40fa-8895-621f948a64ef">live example here</a>, in which I had GPT compute the sum of a length-24 sequence &#8212; something we couldn&#8217;t do at all with the naive prompt.</p><p>This &#8220;scratchpad&#8221; approach is a popular technique in working with LLMs: you have the LLM write down intermediate results and then explicitly use those results to continue working. You can chain arbitrary scratchpad steps together, though it may require re-prompting. </p><p>Abstractly, the idea of keeping and adjusting a memory register for every computational step reminds me of a <a href="https://en.wikipedia.org/wiki/Turing_machine">Turing Machine</a>. 
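</p><p><em>For concreteness, here is a minimal sketch of that pairwise reduction (my own; the hypothetical </em>ask_model_to_add<em> helper stands in for the real per-step LLM call so the sketch is runnable):</em></p>

```python
from functools import reduce

def ask_model_to_add(a: int, b: int) -> int:
    # Hypothetical stand-in for one scratchpad step ("What is a + b?"
    # sent to the model). Replaced with exact addition here so the sketch
    # runs without an API; GPT handles these two-number sums reliably.
    return a + b

def scratchpad_sum(numbers: list[int]) -> int:
    # The running total is the "memory register": each step asks the
    # model for only one small two-number addition.
    return reduce(ask_model_to_add, numbers, 0)

assert scratchpad_sum([92838, 38582, 93838, 81823]) == 307081
```

<p>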
You can imagine prompting the LLM as I did, using its language capabilities to break a symbolic reasoning task into tiny steps that it can handle symbolically, and using a memory register to store the intermediate values so it doesn&#8217;t get overwhelmed. Basically, you can use prompting to implement a Frankenstein Turing Machine inside an LLM. This is sort of neat &#8212; a way to solve symbolic reasoning tasks using LLMs alone, which is what we&#8217;re all after. But it&#8217;s also sort of dumb, because your computer is already a Turing Machine, simulating an LLM, in turn simulating a Turing Machine. My intuition is that the way to get LLMs to reason symbolically at scale is probably not by brute-forcing it via Dumb-Turing-Machine-style computation, but maybe it is. Intuition for LLMs is tricky &#8212; I&#8217;ve been wrong several times just in the course of writing this blog post.</p><div><hr></div><h2>Appendix Bonus Material! 
Prime Factorization</h2><p><em>This was another experiment I conducted that didn&#8217;t really fit into the discussion piece above, but that I thought might be of interest to some of you.</em></p><p>The theme of &#8220;close approximation&#8221; raised a question: when GPT-4 was incorrect, was this straightforward hallucination, or something more subtle, like a token being dropped or double-counted? In my work with LLMs at <a href="https://limit.com">Limit</a>, this is something I&#8217;ve seen frequently: LLMs are biased toward interpreting the beginnings and ends of prompts, and the longer your prompt, the greater the probability that instructions from the middle are not followed. </p><p>There&#8217;s an arithmetic way to test this. It relies on the fact that every integer greater than 1 <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_arithmetic">is the product of a unique multiset of prime numbers</a>. For example, the prime factorization of 38 is 2 and 19, and the prime factorization of 8 is 2, 2, and 2.</p><p>Therefore, I could ask GPT to multiply a set of prime numbers, and then I could compute the prime factors of the number it returned, and check the difference. By way of example, suppose:</p><ul><li><p>My prompt is &#8220;multiply 2, 3, 5, 7, 7&#8221;</p></li><li><p>GPT returns 2646</p></li><li><p>2646 factors as 2 × 3 × 7 × 7 × 9; its prime factorization is 2, 3, 3, 3, 7, 7</p></li><li><p>Therefore, while GPT multiplied the numbers 2, 3, 7, 7, it omitted the 5 from the prompt and included an extra factor of 9 (i.e. two extra 3s).</p></li></ul><p>However, the results were nothing like this. Even for a sequence of only 5 numbers, any overlap in prime factors seemed no more significant than chance. 
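</p><p><em>A sketch of the overlap check itself (my own code, not from the original experiments; trial division is plenty at these magnitudes):</em></p>

```python
from collections import Counter

def prime_factors(n: int) -> Counter:
    # Trial-division prime factorization, with multiplicities.
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def factor_overlap(prompt_primes: list[int], gpt_result: int) -> Counter:
    # Which of the prompted primes survive in GPT's claimed product?
    # Counter & Counter is multiset intersection (minimum of counts).
    return Counter(prompt_primes) & prime_factors(gpt_result)

# In the hypothetical 2646 example: GPT kept 2, 3, 7, 7 and dropped the 5.
assert factor_overlap([2, 3, 5, 7, 7], 2646) == Counter({7: 2, 2: 1, 3: 1})
```

<p>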
An example result below:</p><ul><li><p>Prompt: 37, 37, 47, 47, 127</p></li><li><p>True value: 384,063,367</p></li><li><p>GPT result: 1,213,787,489</p></li><li><p>GPT result prime factorization: 29, 41854741</p></li></ul><p>Or:</p><ul><li><p>Prompt: 11, 13, 19, 43, 47</p></li><li><p>True value: 5,491,057</p></li><li><p>GPT result: 203,679,161</p></li><li><p>GPT result prime factorization: 7, 4973, 5851</p></li></ul><p>I&#8217;ll spare you the many other examples. Unlike the results in addition, when it comes to multiplication, the results are <em>wildly</em> off. There were no close approximations to be seen in the results, and I couldn&#8217;t find any relationship between the prime factors I was using as input and the ones I was getting as output. After looking into it more, I found a paper extending this topic, <a href="https://arxiv.org/pdf/2311.14737v1.pdf">Positional Description Matters for Transformers Arithmetic</a>, which you may find interesting.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For clarification, what I mean by &#8220;symbolic reasoning&#8221; or &#8220;symbolic inference&#8221; is reasoning according to strict rules that are represented by <em>symbols</em>. The most classic example of symbolic reasoning is in solving mathematical equations. <br><br>To me, this is a contrast to inference in language, which is very fuzzy: if I start a sentence with &#8220;I am going to the store today,&#8221; then there are many ways to complete the sentence that would be acceptable, and an LLM will do a great job at generating them. However, if you have a sentence like &#8220;94 + 281 * 289 =&#8221; then there is only one precise way to complete it (in simplified form). 
</p><p>The notion that GPT can use its language inference capabilities to do symbolic reasoning is something I address toward the end of the piece, in the section &#8220;A Dumb Turing Machine&#8221;.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>For example, &#8220;2 + 6 + 9 + 87 + 57&#8221; is a sequence of length 5.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>If you wanted to test this formally, you could follow the approach of <a href="https://arxiv.org/pdf/2402.03744.pdf">certain academic papers</a>, run your own LLM, and then observe which neurons are activated when processing a given token.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Another potential explanation is that the sum-syntax might work better simply because it is fewer tokens/symbols overall. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>For example, <a href="https://loeber.substack.com/p/15-maybe-you-should-invest-in-translation">translating</a> from Polish to French, or converting CSV to JSON.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>I say &#8220;the results of actual addition&#8221; because it is not clear that it is approximating addition as in the symbolic reasoning process. (By way of hyperbolic example, a lucky guess does not approximate the execution of a specific process; at best, it may approximate the results.)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>You might ask: what&#8217;s the <a href="https://en.wikipedia.org/wiki/P-value">P-Value</a> of this? Is this statistically significant? The P-Value is 0.003, so yes this is statistically significant at any reasonable threshold.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#15: Maybe You Should Invest in Translation]]></title><description><![CDATA[One of the traditional applications of artificial intelligence is in translating between languages. It&#8217;s so common now that it&#8217;s boring: Google Translate has been around forever. You can use your iPhone to take a picture of a street sign in a foreign language, get an instant translation, and that&#8217;s not a miracle, but just ordinary. 
I know lots of people excited about technology, but nobody who&#8217;s excited about translation as the]]></description><link>https://essays.johnloeber.com/p/15-maybe-you-should-invest-in-translation</link><guid isPermaLink="false">https://essays.johnloeber.com/p/15-maybe-you-should-invest-in-translation</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Fri, 19 Jan 2024 15:52:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/53b50557-8546-49a5-b4df-d0520a6e72c3_1792x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the traditional applications of artificial intelligence is in translating between languages. It&#8217;s so common that it&#8217;s boring: Google Translate has been around forever. You can use your iPhone to take a picture of a street sign in a foreign language, get an instant translation, and that&#8217;s not a miracle, but ordinary. I know lots of people excited about technology, but nobody who&#8217;s excited about translation as the <em>next big thing</em> &#8212; like, don&#8217;t we already have it?</p><p>In this essay, I&#8217;m going to make exactly that case: <strong>translation is the next big thing</strong>. Language translation as a concept has been so normalized for so long that progress in it and related fields is hugely underrated, especially in its future economic impact. While we are early on in an artificial intelligence boom and there are many open questions as to how and where exactly it will play out, <strong>I believe that one of the highest-confidence AI bets you can make is on ubiquitous language translation</strong>, which will have a significant globalizing economic and social impact, perhaps on par with the invention of the smartphone. Let&#8217;s dig in. 
</p><h2>(1) The Tech</h2><p>While traditional, text-based language-to-language translation has gotten much better in recent years,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> the key is that several other technologies have become really good in parallel, and are coming together productively:</p><ul><li><p>Speech-to-text (i.e. automated transcription) </p></li><li><p>Text-to-speech (i.e. machine-generated voice)</p></li><li><p>Optical character recognition (OCR) </p></li><li><p>Audio and visual style transfer</p></li></ul><p>When you put them together, magic emerges. For example, you can get perfectly translated videos,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> even with all the lip movements adjusted to match the sound.</p><div id="youtube2-AACmqiiJJS4" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;AACmqiiJJS4&quot;,&quot;startTime&quot;:&quot;1s&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/AACmqiiJJS4?start=1s&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>Importantly, this is not some highly proprietary, secretive R&amp;D effort: it&#8217;s so simple and accessible that solo researchers can put together <a href="https://github.com/pranauv1/AI-Video-Translation">open-source versions</a> that get the job done. Part of the reason why these technologies will be so disruptive is that their building blocks are often easy-to-work-with pieces of open-source software. 
<strong>The technology is already here: we are on the cusp of it going mainstream.</strong></p><p>From dubbing videos, it&#8217;s only a short leap to doing it in real time: instantaneous language translation. Imagine walking around a foreign city with your Airpods in, and the chatter of another language becomes English in your ears. It&#8217;s been a long time coming: Bose and Google tried it <a href="https://www.cnet.com/tech/mobile/google-assistant-real-time-translation-comes-to-bose-quietcomfort-35-ii-pixel-buds/">back in 2018</a>, and <a href="https://www.timekettle.co/">Timekettle</a> seems to have gotten it to work today. </p><p>It&#8217;s a safe assumption that not too far in the future, you will have automatic, instantaneous translation always on. If you&#8217;re on a Zoom call with folks who don&#8217;t speak your language, then Zoom will translate it in real-time. If you&#8217;re walking around a city where you don&#8217;t speak the language, but you&#8217;re wearing a pair of inconspicuous smart glasses (Meta Ray-Bans below), they will overwrite the street signs in English. 
You could go on a date with someone who doesn&#8217;t speak your language &#8212; and so long as you&#8217;re both wearing translation earphones, it could go just fine.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MJWP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MJWP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png 424w, https://substackcdn.com/image/fetch/$s_!MJWP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png 848w, https://substackcdn.com/image/fetch/$s_!MJWP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png 1272w, https://substackcdn.com/image/fetch/$s_!MJWP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MJWP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png" width="492" height="361.22802197802196" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1069,&quot;width&quot;:1456,&quot;resizeWidth&quot;:492,&quot;bytes&quot;:938213,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MJWP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png 424w, https://substackcdn.com/image/fetch/$s_!MJWP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png 848w, https://substackcdn.com/image/fetch/$s_!MJWP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png 1272w, https://substackcdn.com/image/fetch/$s_!MJWP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3373762b-49f1-4c73-bd3d-37701507e23e_1730x1270.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>In general, I believe that people will interact with the world around them more and more through digital interfaces. Many use cases have been cited for smart glasses and earphones &#8212; navigation, digital assistant, recording, etc. &#8212; but it seems clear that translation is at the top of the list, both in terms of usefulness and ease of implementation. <em>It is too easy and the benefits are too large for this not to become the case.</em></p><h2>(2) The Impact</h2><p>The key term for thinking about internet-age economy and society is <strong>innervation</strong>:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> to create a nervous system for a large organism. Historically, humanity was a large set of mostly-independent actors, and it took a very long time for information to spread from one group of humans to another. 
What the telegraph and radio started to do &#8212; and the internet kicked into high gear &#8212; is to give humanity a nervous system: a centralized, instantaneous way to transmit information across all people. </p><p>In this framing, ubiquitous translation is an extremely powerful agent of innervation. Most people do not speak one another&#8217;s languages, which means that the true extent of internet-enabled connectivity today is smaller than it seems. While it is true that 5.3 out of 8.1 billion people have been connected via the internet,<em> they congregate in language-specific groups</em>. For example, I don&#8217;t speak Russian, Arabic, or Japanese, so I&#8217;m mostly disconnected from those pockets of the internet. The below table shows how small these language groups are, relative to the size of the internet&#8217;s full population.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TY_J!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TY_J!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png 424w, https://substackcdn.com/image/fetch/$s_!TY_J!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png 848w, https://substackcdn.com/image/fetch/$s_!TY_J!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TY_J!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TY_J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png" width="1456" height="954" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:954,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:326507,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TY_J!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png 424w, https://substackcdn.com/image/fetch/$s_!TY_J!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png 848w, https://substackcdn.com/image/fetch/$s_!TY_J!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TY_J!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2617107d-b0ca-424b-b3ea-59823d48c25e_1930x1264.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>From the mid-90s through the early 2000s, the story of <a href="https://en.wikipedia.org/wiki/List_of_countries_by_number_of_Internet_users">internet adoption</a> was mostly about desktop users in developed nations getting broadband access, amounting to about two billion people. 
From about 2010 onward, the story of the next three billion people coming online was mostly about smartphones: the great agent of global innervation was the inexpensive smartphone with data, bringing the internet to mobile-first communities all over the developing world. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Y7dP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Y7dP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png 424w, https://substackcdn.com/image/fetch/$s_!Y7dP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png 848w, https://substackcdn.com/image/fetch/$s_!Y7dP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png 1272w, https://substackcdn.com/image/fetch/$s_!Y7dP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Y7dP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png" width="576" height="371.86813186813185" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:940,&quot;width&quot;:1456,&quot;resizeWidth&quot;:576,&quot;bytes&quot;:356138,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Y7dP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png 424w, https://substackcdn.com/image/fetch/$s_!Y7dP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png 848w, https://substackcdn.com/image/fetch/$s_!Y7dP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png 1272w, https://substackcdn.com/image/fetch/$s_!Y7dP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a85635d-7593-4cdd-a3d1-bea51fb3ed98_1906x1230.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>It is in this sense that ubiquitous language translation could have an impact on par with the smartphone: not by bringing more folks online, but by <em>greatly</em> <em>increasing the <a href="https://en.wikipedia.org/wiki/Connectivity_(graph_theory)">connectivity</a> </em>of people on the internet. Simply put, given instant, always-on translation, everyone will be able to communicate better,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> with far more people than before. </p><p>Our cultural melting pot is about to become much larger, and the globalizing effects will be massive. There will be gains: for example, in exposure to a greater diversity of cultures and viewpoints, a higher degree of economic participation and exchange, and an easier ability to engage with one&#8217;s interests. 
There will be losses: for example, astroturfing and viral misinformation will be easier to spread, and language-specific internet subcultures will be lost as their users are amalgamated into larger communities.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> </p><p>But as far as I can see, <strong>the largest impact will be on the global labor market</strong>. We are still very early when it comes to remote work. The tools for it have become better, and the pandemic greatly increased employers&#8217; willingness to hire remote employees. But there are two very important considerations:</p><ol><li><p>Remote work is still hampered by language barriers. While there are lots of folks all over the globe who speak English at a professional level, far more do not. Additionally, even fluent speakers may still face <a href="https://www.theguardian.com/careers/accent-hinder-job-prospects">discrimination</a> against their accents. People are <a href="https://news.uchicago.edu/story/foreign-accents-make-speakers-seem-less-truthful-listeners-research-shows">biased</a> <a href="https://journals.sagepub.com/doi/full/10.1177/01979183211042004">against</a> accents, and this likely creates subtle reluctance about hiring remotely in general. Real-time translation can remove these barriers: not just by translating between languages, but also by localizing accents to the listener<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> &#8212; thereby removing subtle but powerful sources of workplace harassment and discrimination. </p></li><li><p>Implicitly, most of this discussion is about English-speaking firms hiring internationally. But that&#8217;s a biased perspective: most firms are not English-speaking. 
Suppose that you&#8217;re a Greek entrepreneur who speaks only Greek, looking to hire internationally: it will be difficult. It doesn&#8217;t help you if international applicants speak English, while you and your office don&#8217;t. <br><br>You may be thinking about always-on, real-time translation expanding the English-speaking internet, and US firms doing more international hiring. Those are true, but translation would provide a much bigger relative benefit &#8212; leveling the playing field &#8212; to those who are currently outside that ecosystem. Countries like Turkey, Brazil, and Bangladesh<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> are the biggest winners from this kind of development.</p></li></ol><p>In sum, the global labor market will become radically more efficient, and its many current national-linguistic subsectors will blend into one. The promise has been made for a long time, but at last, hiring someone five thousand miles away might become literally indistinguishable from hiring someone fifty miles away. How supply and demand will play out is hard to predict, but I suspect that citizens of developing nations will gain hugely in employment opportunities, while elite knowledge work positions in wealthy enclaves will go remote at lower cost.</p><h2>(3) Investing</h2><p>We are on the precipice of a large-scale set of changes that, as a matter of technology and economics, seems inevitable. It raises questions about capital allocation: how do you best bet on this? That&#8217;s not easy to answer. 
The challenges are:</p><ol><li><p>Predicting second-order consequences is difficult;</p></li><li><p>Greater efficiency does not always create opportunities to capture profits;</p></li><li><p>This is a long-term secular trend that may not take place overnight, and positioning <em>today</em> for a five-, ten-, or fifteen-year trend is not easy.</p></li></ol><p>Regardless, I&#8217;ll provide some thoughts below.</p><p><strong>Invest At the Fundamental Model Layer? </strong></p><p>Probably not for me. My first hunch is that future generations of LLMs will outperform state-of-the-art machine translation systems. My second hunch is that open-source LLMs will probably perform just as well as proprietary LLMs in this respect in the long run. While I see some fields where proprietary fundamental models may maintain some perpetual utility edge, I don&#8217;t think that&#8217;s the case here.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> </p><p><strong>At the Fine-Tuned Layer?</strong> </p><p>Translations are contextual! As I suggested in footnote 1, one of the great perks of the architectures at play is the ability to apply context and stylize the output: the desired style of translation will differ depending on whether the input is, for example, a contract, a patent, a sales email, a poem, or a novel. If I&#8217;m listening to a Spanish podcast translated into English, I might want it narrated in the voice of Brian Blessed; or if I&#8217;m reading shareholder letters from Japanese companies, I might like them written in the crisp prose of Steve Jobs. While there will certainly be general translation models, you may expect to see thousands of fine-tuned stylistic models. I am uncertain whether this yields a single venture-scale opportunity or an artisanal cottage industry.</p><p><strong>At the Interface Layer?</strong></p><p>To me, this is the more likely bet. 
I mentioned that I expect to see more and more digital interfaces that people use to navigate the world around them. While that includes things like smart glasses and earphones, it also includes regular desktop and mobile software. Imagine an application that is always on and translates any foreign words that pop up on your screen, before you even see them. Imagine an audio plugin that translates any language coming out of your speakers.</p><ul><li><p><em>Software</em>: there is a lot of diverse opportunity for these translation layers, and the field is fresh. Some competition is starting to crop up in the video translation space, but it&#8217;s still early. Due to the accessibility of the technology, there is some danger of commoditization, so startups seeking to sustain significant profit margins long-term will need to think about network effects and moats. Regardless, there certainly exists some opportunity.</p></li><li><p><em>Hardware</em>: this gets a little more interesting. The space is wide-open, and the user experience offered by these physical devices will be paramount. Surely it&#8217;s possible to make a nicer product than Timekettle. This may be a good arena for a talented product designer to build something <em>excellent </em>and establish a quality moat. It&#8217;s noteworthy that while Apple and Meta should be seen as formidable long-term competitors, Apple hasn&#8217;t really managed to integrate AI with its hardware productively. It is surprising that Siri is so far behind nowadays; perhaps this is an area of institutional weakness for Apple that can be attacked. </p></li></ul><p><strong>At the Geographic Level?</strong></p><p>Another approach is to focus on the downstream consequences of translation. The effects on the labor market, accelerating globalization, etc. 
lend themselves well to betting on labor marketplaces, high-skilled education targeted at geographies with strong labor/wage arbitrages,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> remote work enablement, collaboration software, and so on. The main question on these opportunities is whether they are more ripe for new entrants, or for incumbents with existing network effects.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!H2di!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!H2di!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png 424w, https://substackcdn.com/image/fetch/$s_!H2di!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png 848w, https://substackcdn.com/image/fetch/$s_!H2di!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png 1272w, https://substackcdn.com/image/fetch/$s_!H2di!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!H2di!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png" width="508" height="361.8204081632653" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:698,&quot;width&quot;:980,&quot;resizeWidth&quot;:508,&quot;bytes&quot;:529264,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!H2di!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png 424w, https://substackcdn.com/image/fetch/$s_!H2di!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png 848w, https://substackcdn.com/image/fetch/$s_!H2di!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png 1272w, https://substackcdn.com/image/fetch/$s_!H2di!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe99315f0-87af-42d9-bd5f-cdf51d626347_980x698.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>Finally, while we have discussed labor markets at length, I also suggested earlier that the largest relative benefit might accrue to entrepreneurs who are currently outside the English-speaking internet ecosystem. There are many geographies that currently do not have globally significant entrepreneurship, but do have the talent and the regulatory environment for it. They just need the global language access. For example, look at how successful South Korea has been in exporting <em>culture</em> (film, television, music), punching far above its weight on a global scale: but not yet in software. 
As language barriers fade, this too shall come.</p><p><em>Thanks to Evan and Gavin for their comments and feedback on this piece.</em></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>There&#8217;s some debate as to whether the current generation of Large Language Models are superior to state-of-the-art machine translators, such as <a href="https://www.deepl.com/en/translator">DeepL</a>. You can read some of the discussion about which is better in which context <a href="https://news.ycombinator.com/item?id=37501114">here</a> and <a href="https://davidabell.substack.com/p/playing-around-with-machine-translation">here</a>. To me, it seems clear that right now the field is divided: in some contexts, machine translators are better, and in other contexts, LLMs are better. I think there is good reason to believe that future, higher-parameter LLMs will be better than the current ones, and I expect they will eventually (stochastically) dominate traditional machine translators. 
<br><br>An additional important perk of LLMs, as opposed to machine translators, is that they can be supplied with additional instructions. For example, I might submit an English text to Google Translate, and ask for it to be translated into German. I&#8217;ll get a result. However, I could prompt an LLM with the same translation request, and ask it to translate in the style of, for example, the translator Michael Hofmann. Or I could request the translation to lean into the style of German novelists Stefan Zweig or Hermann Hesse. The ability to perform not just translation, but <em>translation in a particular style,</em> or more generally translating with additional meta-context seems like a valuable point in favor of LLM translators. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>In audio-visual terminology, these are <a href="https://en.wikipedia.org/wiki/Dubbing">dubbed</a> videos. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>All credit for this term and concept to a <a href="https://www.youtube.com/watch?v=yeCq8GgDyXM">2013 talk</a> from Steve Jurvetson. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Some of these forms of better communication are not immediately obvious. For example, <em>voice messages</em> are far more common in some cultures and language groups than others; I think that has something to do with the compatibility of that language with a digital keyboard. 
Again, the notion of a multi-modal, always-on digital interface that handles translation could greatly bring down accessibility barriers that are currently present everywhere for folks whose language doesn&#8217;t use a Latin alphabet. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Similar to how tens of thousands of niche online forum communities died as discussion migrated to mass platforms like Reddit and Facebook. It will be a brave new world for any internet anthropologist.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>For a related contemporary example, see <a href="https://twitter.com/aphysicist/status/1747868626948907325">this speech</a> by Javier Milei, translated by <a href="https://www.heygen.com/">HeyGen</a>. What&#8217;s noteworthy about this speech is that HeyGen not only translated Milei to English, but then <em>applied Milei&#8217;s accent back to the translation </em>for authenticity. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>For this example, I have picked countries that have <a href="https://en.wikipedia.org/wiki/List_of_countries_by_English-speaking_population">low rates of English proficiency</a> and are by far the globally primary speaker of their national language. 
</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>I mean &#8220;here&#8221; in the general sense: not just language-to-language translation, but also speech-to-text, text-to-speech, speech style transfer, etc. To me, this whole class of applications looks like something where capabilities eventually max out for practical purposes, and open-source models will get there.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p><a href="https://andela.com/">Andela</a> comes to mind as a good example of the type of business that might strongly benefit from another wave of labor market globalization.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#14: Why Europe Fails to Create Wealth]]></title><description><![CDATA[This Friday, the European Union passed the AI Act, an EU-wide, landmark set of regulations on the usage of artificial intelligence. As you&#8217;d expect, it applies various levels of prohibition and permissioning for generative applications like ChatGPT. But it also applies to conventional tools, like facial recognition and, if you read it closely, even to things like plain-old linear regression. The impact is sweeping, and I suspect it will once more have a chilling effect on technology in Europe. 
This is dangerous for Europeans: in the long term, it may be yet another stone on a road to serfdom.]]></description><link>https://essays.johnloeber.com/p/14-why-europe-fails-to-create-wealth</link><guid isPermaLink="false">https://essays.johnloeber.com/p/14-why-europe-fails-to-create-wealth</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Mon, 11 Dec 2023 20:30:12 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ed688195-394c-43ee-8381-9d598c4ef752_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last week, the European Union passed the <a href="https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/">AI Act</a>, an EU-wide, landmark set of regulations on the usage of artificial intelligence. As you&#8217;d expect, it applies various levels of prohibition and permissioning for generative applications like ChatGPT. But it also applies to conventional tools, like facial recognition and, if you read it closely, even to things like plain-old linear regression. The impact is sweeping, and I suspect it will once more have a chilling effect on technology in Europe. This is dangerous for Europeans: in the long term, it may be yet another stone on a road to serfdom.</p><p>I grew up in Europe (mostly Germany, Denmark, Switzerland). I had never even set foot outside the continent until I was 18, when I moved to the United States. I have lived here for 12 years now, with most of that in San Francisco. This gives me some perspective on the disconnect between European attitudes and the creation of  prosperity by technological advancement. 
In this essay, I will cover:</p><ol><li><p>Why the AI Act is bad;</p></li><li><p><em>Providerism</em> as the reason why Europe fails to build technology and create wealth;</p></li><li><p>What this means for Europe;</p></li><li><p>How to fix it.</p></li></ol><h2>(1) Why the AI Act is Bad</h2><p>If you read a <a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence">summary</a> of the <a href="https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf">AI Act Briefing</a>, one thing will stand out to you immediately: it&#8217;s all regulated at the application level. The flavor of regulation is stuff like &#8220;you&#8217;re not allowed to use AI to exploit vulnerable groups,&#8221; &#8220;if you&#8217;re using AI for educational purposes, it has to pass this set of EU tests,&#8221; &#8220;you can&#8217;t use facial recognition for these forbidden purposes.&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> This style of regulating-by-patchwork-of-many-distinct-but-related-rules is infamously ineffective:</p><ul><li><p>Because it lacks unifying/general principles, new regulations will need to be added for every new future case.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> There&#8217;s a new application? Let&#8217;s spend a thousand hours of lawmaker time on adding one more rule set.</p></li><li><p>The specificity of the rules ironically makes it unclear whether they apply or not, which invites adversarial lawyering: &#8220;<em>is my application really using biometric data? No, it&#8217;s metadata&#8230;</em>&#8221; or &#8220;<em>does my application do forbidden social credit scoring? 
No, we&#8217;re really selling an actuarial risk assessment and you choose how to use it&#8230;</em>&#8221;</p></li><li><p>On the other hand, the AI Act regulates techniques so basic that most technology companies will be technically out of compliance in some way from day one.</p></li></ul><p>This is regulation at its worst: <em>broad, sweeping, everyone&#8217;s technically out of compliance in some way, but you can pay to fight it</em>. Obedient, rule-following entrepreneurs will waste all their time and money trying to comply with every paragraph of the AI Act. More <a href="https://en.wikipedia.org/wiki/Realpolitik">Realpolitik</a>-minded entrepreneurs will position their business such that it&#8217;s in a gray area, assume that <em>if they&#8217;re successful</em>, they&#8217;ll get sued one day, and save some funds for that eventuality. </p><p>And if they get sued, what are the penalties? Up to 7% of global annual turnover. This is a farce. If the EU&#8217;s contention is that this technology is so dangerous that it requires EU-wide regulation, then the penalties should actually be a lot higher. If there&#8217;s a bad actor running a massively abusive AI business, and the biggest threat they face is a 7% revenue penalty that they can probably knock down to 2% after a few years of litigation &#8212; then that&#8217;s no deterrent at all! These businesses run at 75% gross margins. You have one year that runs at 73%? Doesn&#8217;t matter.</p><p>This puts the AI Act in the dismal middle ground of regulation: annoying enough to dissuade legitimate entrepreneurs, toothless enough to not prevent large-scale abuse. I am shocked that the AI Act does not<em> </em>include any capability for <em>banning</em> something that would actually be dangerous, like a nation-state-funded, AI-optimized propaganda machine masquerading as a social network. 
If they can&#8217;t ban products, then it&#8217;s not consumer protection: it&#8217;s just wealth extraction.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><h2>(2) Providerism</h2><p>It is extremely telling that the AI Act&#8217;s regulations are all at the application level. The AI Act is drafted from the perspective of a <em>consumer</em> rather than a <em>producer</em>. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p7f0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p7f0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png 424w, https://substackcdn.com/image/fetch/$s_!p7f0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png 848w, https://substackcdn.com/image/fetch/$s_!p7f0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png 1272w, https://substackcdn.com/image/fetch/$s_!p7f0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png 1456w" sizes="100vw"><img
src="https://substackcdn.com/image/fetch/$s_!p7f0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png" width="426" height="705.338065661047" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:1866,&quot;width&quot;:1127,&quot;resizeWidth&quot;:426,&quot;bytes&quot;:76044,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p7f0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png 424w, https://substackcdn.com/image/fetch/$s_!p7f0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png 848w, https://substackcdn.com/image/fetch/$s_!p7f0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png 1272w, https://substackcdn.com/image/fetch/$s_!p7f0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F55b382c2-dc68-4183-917c-1eebd1db3f2e_1127x1866.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" 
type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I do not endorse regulating AI technologies at this point in time. But for the sake of argument, if you wanted to regulate AI, I think you&#8217;d want to regulate somewhere at the <em>production</em> level, not at the <em>consumption</em> level.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> Why is it that the EU regulators are focusing entirely on the <em>consumption</em> level? </p><p>Well, because they are consumers. Europe is the continent of consumption. 
This is deeply ironic, because Europeans will thumb their noses at America and call it a consumerist society: runaway fast food obesity, endless billboard advertising, hapless folks drowning in credit card debt. But while America may be consumerist at the micro-level, it is highly productive at the macro-level: the US makes tons of great stuff. From medicine to fundamental scientific research to technology to space travel, we&#8217;re leading the charge. European individuals may not be consumerists, but <em>Europe is a macro-consumer</em>: virtually everything of value comes from elsewhere.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><p>I didn&#8217;t really get this until I moved to San Francisco. <strong>I had never in my life met people who make stuff.</strong> In Europe, my parents worked for non-profits. The parents of my friends were mostly middle managers, financiers, or professional service providers. Living in Silicon Valley is profoundly different, because the people you meet are working on <em>building things that you use</em>. It is hard to articulate just how colossal that difference in exposure is. In Europe, I used computers all day &#8212; but never gave any mind to where computers actually come from: you buy them at the store and that glosses over the abstraction. It feels like I was sleepwalking through economic life. </p><p>I call this <strong>Providerism</strong>: the ability to ignore political-economic reality because everything is provided for you, and the underlying mechanics and costs are abstracted away. Europeans may not be consumerists, but they are hardcore providerists. Growing up, virtually every consumer good I interacted with was made in Japan or made in China, and in 18 years that never gave rise to more than 15 minutes of conversation. The goods that you want appear from far away in your local store: they are provided. 
And if you fall on hard times and cannot afford these goods? The state will provide the basics. If you&#8217;re 22 and don&#8217;t know what to do, the state will provide more time and will provide a master&#8217;s degree. Or two. And if there is war, the state will call NATO, and NATO will provide defense. </p><p>Just before the final draft of the AI Act, Thierry Breton of the European Commission <a href="https://x.com/ThierryBreton/status/1733235664915706220">posted</a> a picture of the team hard at work:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vVR0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vVR0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vVR0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vVR0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vVR0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!vVR0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg" width="396" height="527.9093406593406" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:396,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;Image&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Image" title="Image" srcset="https://substackcdn.com/image/fetch/$s_!vVR0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vVR0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vVR0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vVR0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1188a1bc-e3ea-40bd-8591-37f015082ed0_1536x2048.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button 
tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>How fitting: the European Commission assembled around an iPad. On the back, in fine print, it will read: <em>Designed by Apple in California. Assembled in China.</em> There is no European iPad. There is no European computer. There is no European search engine. There are only European consumers, to whom things are quasi-magically provided, and so they regulate the providing and consumption of those things. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dXt2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dXt2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png 424w, https://substackcdn.com/image/fetch/$s_!dXt2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png 848w, https://substackcdn.com/image/fetch/$s_!dXt2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png 1272w, https://substackcdn.com/image/fetch/$s_!dXt2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dXt2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png" width="560" height="485.85915492957747" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1232,&quot;width&quot;:1420,&quot;resizeWidth&quot;:560,&quot;bytes&quot;:152619,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dXt2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png 424w, https://substackcdn.com/image/fetch/$s_!dXt2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png 848w, https://substackcdn.com/image/fetch/$s_!dXt2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png 1272w, https://substackcdn.com/image/fetch/$s_!dXt2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5fd4430-1043-45d3-9ceb-a1d8ee071190_1420x1232.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption"><a href="https://www.ft.com/content/e0177eb7-8d17-48aa-a6ad-fccd0655f557">Source: FT</a></figcaption></figure></div><p>The European Union is so deep in Providerism that it does not recognize how far removed it is from the <em>production</em> of things of value. <strong>This myopia is a great peril for the citizens of Europe.</strong> Every year that passes, Europe slips deeper into complacency as goods and services are provided from abroad, while regulators are writing missives and assessing fines in impotent, play-acting gestures of agency. This encumbers European technological entrepreneurship by weakening domestic entrepreneurial network effects<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> and setting ever higher local barriers to entry. 
This is terrible, because for Europe, the main way to maintain and achieve long-term prosperity is <strong>obviously</strong> to innovate technologically and produce things of value.</p><h2>(3) The Writing on the Wall</h2><p>Europe is falling behind. It largely missed the internet and personal computing booms, and now it sits in danger of missing the coming AI boom. Today&#8217;s Europeans are not yet poor &#8212; they are still living off the prosperity created by prior generations, and that enables their passive consumption<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> &#8212; but tomorrow&#8217;s Europeans may be. </p><p>Further, European regulator gamesmanship can&#8217;t go on forever. Foreign firms don&#8217;t just have to comply with the AI Act, but also with GDPR and many other EU standards. At some point it&#8217;s all just too cumbersome and foreign firms will play hardball.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> In that situation, the EU will lose because they mostly don&#8217;t have domestic alternatives to major foreign software products, and EU consumers<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> are not going to be willing to go back to the stone age.</p><p>This is a poor setup, both for European consumers and for the prosperity of Europe. The EU needs to course-correct swiftly and decisively.</p><h2>(4) How to Fix It</h2><p>Europe needs to escape Providerism, discourage complacent reliance on outside goods and services, and encourage the virtue inherent in making new things. This will require tremendous leadership. The task ahead is no small feat, and it will be unpopular. People like having things provided to them, but this state of affairs cannot last. 
From a public-budgets perspective, you already see Providerism breaking all over the EU. The great task ahead for European regulators is to facilitate <strong>wealth creation</strong>, not wealth consumption. My recommendations:</p><ol><li><p><strong>Deregulate Technology.</strong> Throw it all out and start over. In the future, regulate big negative externalities, not imaginary potential ones. Do not presume need.</p></li><li><p><strong>Federate. </strong>The EU needs to make it easy for businesses to expand across the entire EU market. Right now, there are many legal and financial barriers to doing so. </p></li><li><p><strong>Foster a Production Mindset.</strong> It is important for Europe to repatriate some production of cutting-edge goods and services: not even for economic reasons, but just culturally. Focus on making things. </p></li><li><p><strong>Lean into the AI Boom.</strong> The AI boom is a unique opportunity for Europe because it is tied to academic research, and the EU&#8217;s many universities are graduating great talent with no debt. <a href="https://mistral.ai/">Mistral</a> is an amazing company: there should be more!</p></li></ol><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>My favorite clause is on the prohibition of &#8220;AI systems that manipulate human behavior to circumvent their free will.&#8221; What a premise!</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>If you ever wonder how we got legal systems with thousands of byzantine and obstructive laws left over from the 1800s: this is how. One IF-statement at a time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>We see this extractionist mindset all the time: European politicians always drum on about supposed privacy invasions from Google and Facebook, then fine them for some totally unimpactful amount of money, and then they pipe down for six months before starting over again. If those politicians had real grievances, they would try to ban those products, or build local alternatives. They do neither. 
They&#8217;re just selectively enforcing regulation to extract what I view as a bribe to operate.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>The argument for this is tangential to the main point, so I&#8217;ve relegated it to the footnotes:</p><ul><li><p>Regulating at the consumption level doesn&#8217;t work for all the reasons from the first section. You wind up with a crazy patchwork of rules, and everything is maybe covered, maybe not. It&#8217;s so impractical that you could call it a jobs-creation-program for regulatory litigators. </p></li><li><p>The reason for regulation is that lawmakers are afraid of the <em>sophistication</em> of these models. That&#8217;s why the AI Act got drafted now and not 10 or 20 years ago. Machine learning models have existed for decades, but the results are just much better today. </p></li><li><p>Sophistication is mostly synonymous with scale. If you want to regulate sophisticated models, then you pick some scale thresholds: for example, you say that models trained with more than 10^26 floating-point operations or on more than 50 million datapoints are subject to registration/inspection. (From there, you can check that the model doesn&#8217;t violate existing law, e.g. anti-discrimination statutes.) That&#8217;s much simpler, cleaner, and less ambiguous than the current edge-case-circus of the AI Act.</p><ul><li><p>The point on not violating existing law is significant: in many respects, AI does not create <em>new</em> opportunities for malfeasance, but just scales up existing ones. Those existing ones should already be covered by existing law! The set of truly <em>new </em>scenarios for regulation to address seems quite small to me. 
</p></li></ul></li></ul></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>You might disagree and point at European cars, German steelworks, British petroleum, and so forth. But these are the colossuses of yesteryear. They are fundamentally not dealing in new technologies. Most of them are in decline, losing market share, and we will see them disappear. More precisely, we will see them purchased and depleted by foreign private equity firms. This is already ongoing.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Here I&#8217;m just trying to make a basic protectionist point: the more a nation relies on imports and does not manufacture goods domestically, the harder it is for the nation to build up domestic manufacturing. <em>Producing things</em> has a geographic network effect to it. It&#8217;s much easier to get started when there are other folks around who are also producing things. New arrivals benefit from collective infrastructure and expertise. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>I am not the first person to make this point. 
Writing this, I was reminded of <a href="https://www.economist.com/europe/2022/02/26/europe-is-the-free-rider-continent">Europe is the Free-Rider Continent</a> from The Economist.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>In practice, this might mean:</p><ol><li><p>Blocking parts of their service for European customers, the way Mark Zuckerberg <a href="https://www.reuters.com/technology/why-are-facebook-instagram-ending-news-access-canada-2023-06-26/">did in Canada</a>. Threads rolling out in the US six months before the EU is another example.</p></li><li><p>Ignoring EU regulation entirely.</p></li><li><p>Officially stopping service to European customers, who will then have to connect via a US VPN, similar to how they might evade Netflix&#8217;s country blocks. </p></li></ol><p>It would be funny if over-regulation of consumed goods put Europe into a situation where it can no longer regulate (or consume) those goods. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Curiously, I don&#8217;t hear very much about the actual stated or revealed preferences of European consumers in the first place. My hunch is that the consumers do not actually make much use of the various consumer protections that the EU provides them: I click &#8220;accept cookies&#8221; on every cookie popup because the alternative flow (clicking decline and then various other buttons every time I load a webpage) is just too impractical. 
I cannot spend my life clicking through popups.</p></div></div>]]></content:encoded></item><item><title><![CDATA[#13: A Timeline of the OpenAI Board]]></title><description><![CDATA[Yesterday, Sam Altman and Greg Brockman were fired from the Board of Directors of OpenAI.]]></description><link>https://essays.johnloeber.com/p/a-timeline-of-the-openai-board</link><guid isPermaLink="false">https://essays.johnloeber.com/p/a-timeline-of-the-openai-board</guid><dc:creator><![CDATA[John Loeber]]></dc:creator><pubDate>Sat, 18 Nov 2023 21:21:30 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e82ad4d4-256c-4b8a-a031-71643922f066_1792x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Yesterday, Sam Altman and Greg Brockman were fired from the Board of Directors of OpenAI. Following, all of Tech Twitter was abuzz with one question: wait a moment, who was on the Board? And after they found out, they asked: who on earth are <a href="https://www.google.com/search?q=tasha+mccauley">Tasha McCauley</a> and <a href="https://cset.georgetown.edu/staff/helen-toner/">Helen Toner</a>? It turns out that OpenAI&#8217;s Board had undergone numerous changes over the years, especially recently. And that just wasn&#8217;t ever the biggest news about OpenAI, so those changes didn&#8217;t spark the concerns that maybe they should have. </p><p>I combed through the Internet Archive and OpenAI&#8217;s non-profit filings to try to make sense of OpenAI&#8217;s governance. Below, I have attempted to chronicle the composition of OpenAI&#8217;s Board over time, point out the conflicts, and you can see how we got to the earthquake yesterday. 
You can <a href="https://loeber.substack.com/i/138968534/summaryperspectives">skip to the end</a> for my summary perspective.</p><h2><strong>December 11, 2015</strong></h2><p>OpenAI is founded.</p><p>Board Directors:<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><ul><li><p>Elon Musk (Co-Chair)</p></li><li><p>Sam Altman (Co-Chair)</p></li></ul><h2><strong>December 31, 2016</strong></h2><p>OpenAI&#8217;s <a href="https://projects.propublica.org/nonprofits/organizations/810861541/201703459349300445/full">Form 990 public filings</a> for calendar year 2016 show the Board Directors:</p><ul><li><p>Elon Musk</p></li><li><p>Sam Altman</p></li><li><p>Chris Clark</p></li><li><p>Jonathan Levy (?)</p></li></ul><p>Chris was the initial COO of OpenAI, and still works there to this day. Jonathan Levy was listed as Secretary/Treasurer, and may have been a trustee rather than a Director. It&#8217;s unclear from the filings.</p><h2><strong>March 2017</strong></h2><p>Open Philanthropy <a href="https://www.goodventures.org/our-portfolio/grants/openai-general-support/">donates</a> $30M to OpenAI. Holden Karnofsky, the founder of Open Philanthropy, <a href="https://www.openphilanthropy.org/grants/openai-general-support/#5-relationship-disclosures">joins</a> OpenAI&#8217;s Board of Directors.</p><h2><strong>December 31, 2017</strong></h2><p>OpenAI&#8217;s <a href="https://projects.propublica.org/nonprofits/organizations/810861541/201920719349300822/full">Form 990 public filings</a> for calendar year 2017 show the Board Directors: </p><ul><li><p>Elon Musk</p></li><li><p>Sam Altman</p></li><li><p>Chris Clark</p></li><li><p>Holden Karnofsky</p></li><li><p>Greg Brockman</p></li><li><p>Ilya Sutskever</p></li></ul><h2><strong>February 20, 2018</strong></h2><p>Elon Musk is removed from the Board. 
The <a href="https://openai.com/blog/openai-supporters">official press release</a> proclaims a departure to avoid potential conflicts, but <a href="https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai">journalists report</a> leadership disagreements culminating in Elon proposing a takeover and being rebuffed.</p><p>Board Directors:</p><ul><li><p>Greg Brockman</p></li><li><p>Ilya Sutskever</p></li><li><p>Holden Karnofsky</p></li><li><p>Sam Altman</p></li></ul><p>We don&#8217;t know exactly when Chris Clark was removed from the Board. </p><h2><strong>March 2018</strong></h2><p>Reid Hoffman, founder of LinkedIn and General Partner at Greylock, joins the Board. I couldn&#8217;t find a press release or official announcement, but Reid&#8217;s <a href="https://www.linkedin.com/in/reidhoffman/details/experience/">LinkedIn profile</a> has the dates.</p><h2><strong>April 24, 2018</strong></h2><p>Adam D&#8217;Angelo, CEO of Quora and former Facebook CTO, <a href="https://twitter.com/adamdangelo/status/988859015315701760?lang=en">joins</a> the Board. This follows the February Board changes, where the OpenAI blog post had noted the intent to add another Director to the Board soon.</p><h2><strong>September 2018</strong></h2><p>Sue Yoon joins the Board. Sue&#8217;s exact employment at the time was unclear &#8212; she was previously an EIR at First Round, and in the coming months would lead robotics projects at Google. 
Similar to Reid Hoffman, I couldn&#8217;t find an official announcement, but her <a href="https://www.linkedin.com/in/sue-yoon-8b35a214/">LinkedIn profile</a> has the dates.</p><h2><strong>December 31, 2018</strong></h2><p>OpenAI&#8217;s <a href="https://projects.propublica.org/nonprofits/organizations/810861541/201943199349318399/full">Form 990 public filings</a> for calendar year 2018 list the Board Directors: </p><ul><li><p>Sam Altman</p></li><li><p>Sue Yoon</p></li><li><p>Holden Karnofsky</p></li><li><p>Greg Brockman</p></li><li><p>Ilya Sutskever</p></li><li><p>Adam D&#8217;Angelo</p></li><li><p>Tasha McCauley</p></li></ul><p>I could not find anything in the way of a source on when, or under what circumstances, Tasha McCauley joined the Board. </p><h2><strong>March 11, 2019</strong></h2><p>This gets strange. There&#8217;s an OpenAI <a href="https://openai.com/blog/openai-lp">blog post listing the Board Directors</a>:</p><ul><li><p>Greg Brockman</p></li><li><p>Ilya Sutskever </p></li><li><p>Sam Altman</p></li><li><p>Adam D&#8217;Angelo</p></li><li><p>Holden Karnofsky</p></li><li><p>Reid Hoffman</p></li><li><p>Shivon Zilis</p></li><li><p>Tasha&nbsp;McCauley</p></li></ul><p>Note the unannounced elevation of Shivon Zilis (previously an advisor) and the unannounced departure of Sue Yoon. Weirder yet, OpenAI published its <a href="https://web.archive.org/web/20190311213355/https://openai.com/about/">new homepage</a> just that day, still listing Sue Yoon as a Board Director, and not Shivon Zilis.</p><h2><strong>November 2019</strong></h2><p>Sue Yoon leaves OpenAI&#8217;s Board, according to her LinkedIn. The OpenAI Website <a href="https://web.archive.org/web/20191201065651/https://openai.com/about/">still lists</a> her (and not Shivon Zilis) as a Board Director. 
</p><h2><strong>December 31, 2019</strong></h2><p>Another year, another OpenAI <a href="https://projects.propublica.org/nonprofits/organizations/810861541/202003219349325305/full">Form 990 public filing</a>, listing the Board Directors:</p><ul><li><p>Ilya Sutskever</p></li><li><p>Greg Brockman</p></li><li><p>Sam Altman</p></li><li><p>Reid Hoffman</p></li><li><p>Sue Yoon</p></li><li><p>Holden Karnofsky</p></li><li><p>Adam D&#8217;Angelo</p></li><li><p>Tasha McCauley</p></li></ul><p>Note that Shivon Zilis still doesn&#8217;t appear in the list of Board Directors. Was the March 11, 2019 blog post just wrong? Did someone in marketing make a mistake and no-one caught it? </p><h2><strong>December 31, 2020</strong></h2><p>Slow news year. The Form 990 public filing for calendar year 2020 lists the Board of Directors, finally including Shivon Zilis and not Sue Yoon:</p><ul><li><p>Ilya Sutskever</p></li><li><p>Greg Brockman</p></li><li><p>Sam Altman</p></li><li><p>Reid Hoffman</p></li><li><p>Shivon Zilis</p></li><li><p>Holden Karnofsky</p></li><li><p>Adam D&#8217;Angelo</p></li><li><p>Tasha McCauley</p></li></ul><p>I couldn&#8217;t find a public statement on when Shivon actually joined the Board, other than the March 2019 blog post that may have been in error. </p><h2><strong>May 3, 2021</strong></h2><p>Will Hurd, Republican member of the House of Representatives, and former CIA agent, <a href="https://openai.com/blog/will-hurd-joins">joins</a> the Board.</p><h2><strong>September 8, 2021</strong></h2><p>Helen Toner, Director at Georgetown&#8217;s Center for Security and Emerging Technologies, and formerly of Holden Karnofsky&#8217;s Open Philanthropy, <a href="https://openai.com/blog/helen-toner-joins">joins</a> the Board. 
</p><h2><strong>Fall 2021</strong></h2><p>Holden Karnofsky resigns from the Board, <a href="https://www.vox.com/future-perfect/2023/3/18/23645013/openai-gpt4-holden-karnofsky-artificial-intelligence-ai-safety-existential-risk">citing</a> a potential conflict because his wife, Daniela Amodei, is helping start Anthropic, a major OpenAI competitor, with her brother Dario Amodei. (They all live(d) together.) The exact date of Holden&#8217;s resignation is unknown; there was no contemporaneous press release.</p><p>Between October and November 2021, Holden was quietly removed from the list of Board Directors on the OpenAI website, and Helen was added (<a href="https://forum.effectivealtruism.org/posts/fmDFytmxwX9qBgcaX/why-aren-t-you-freaking-out-about-openai-at-what-point-would?commentId=KavuL7Q5qdvxoYSsd">Discussion Source</a>). Given their connection via Open Philanthropy and the fact that Holden&#8217;s Board seat appeared to be permanent, it seems that Helen was picked by Holden to take his seat. </p><h2><strong>December 31, 2021</strong></h2><p>OpenAI&#8217;s <a href="https://projects.propublica.org/nonprofits/organizations/810861541/202243199349314989/full">Form 990 public filings</a> list the Board Directors of the 2021 calendar year: </p><ul><li><p>Ilya Sutskever</p></li><li><p>Shivon Zilis</p></li><li><p>Greg Brockman</p></li><li><p>Will Hurd</p></li><li><p>Sam Altman</p></li><li><p>Reid Hoffman</p></li><li><p>Holden Karnofsky</p></li><li><p>Adam D&#8217;Angelo</p></li><li><p>Tasha McCauley</p></li><li><p>Helen Toner</p></li></ul><p>The fact that both Holden and Helen are listed here is not surprising; both of them were Board Directors at points in 2021. (It does not necessarily imply that they were both on the Board at the same time.)</p><h2><strong>2022?</strong> </h2><p>There did not appear to be any Board events in 2022. 
The Form 990 does not appear to have been filed as of the time of writing.</p><h2><strong>January 2023</strong></h2><p>Reid Hoffman steps down from the Board, <a href="https://www.bloomberg.com/news/articles/2023-03-03/linkedin-co-founder-hoffman-stepping-down-from-openai-board">citing</a> the need to avoid potential conflicts with his investments. While this was reported in March 2023, according to his LinkedIn profile&#8217;s dates it happened in January.</p><h2><strong>March 23, 2023</strong></h2><p>Shivon Zilis <a href="https://www.theinformation.com/articles/shivon-zilis-musk-associate-leaves-openai-board">resigns</a> from the Board for reasons unknown. (Commentators speculate that her resignation is over conflicts due to her bearing Elon Musk&#8217;s children, but that is ultimately just speculation.)</p><h2><strong>July 13, 2023</strong></h2><p>Will Hurd <a href="https://www.bloomberg.com/news/articles/2023-07-13/republican-presidential-hopeful-will-hurd-leaves-board-of-openai">resigns</a> from the Board, citing the need to focus on politics/his 2024 Presidential campaign. (Three months later, in October, he drops out of the race. I don&#8217;t know what to make of that.)</p><h2><strong>November 17, 2023</strong></h2><p>Sam Altman is fired from OpenAI and the OpenAI Board in a surprise meeting of the Board (except Greg). 
Minutes later, in a <a href="https://twitter.com/gdb/status/1725736242137182594">separate surprise Board meeting</a>, Greg Brockman is removed from the Board (and as Board Chairman).</p><p>Board Directors:</p><ul><li><p>Adam D&#8217;Angelo</p></li><li><p>Helen Toner</p></li><li><p>Tasha McCauley</p></li><li><p>Ilya Sutskever</p></li></ul><h2><strong>Summary/Perspectives</strong></h2><p>The first thing that sticks out to me is that there have been, for several quarters, two significant conflicts of interest on the Board:</p><ul><li><p>Adam D&#8217;Angelo founded, and appears to be spending all his time developing, <a href="https://twitter.com/poe_platform">Poe</a>, an AI chat platform partially leveraging and partially competing with OpenAI. In my opinion, that&#8217;s too close. Reid Hoffman resigned over potential indirect investment conflicts; Adam&#8217;s conflicts are more direct. Best practice would&#8217;ve been for Adam to resign when he began working on Poe.</p></li><li><p>Helen Toner and Tasha McCauley are jointly participating in a highly ideological AI governance organization. As Alex Konrad <a href="https://www.forbes.com/sites/alexkonrad/2023/11/17/these-are-the-people-that-fired-openai-ceo-sam-altman/?sh=47da17654ae9">noted</a>: &#8220;McCauley currently sits on the advisory board of British-founded international Center for the Governance of AI (GovAI) alongside fellow OpenAI director Helen Toner.&#8221; It turns out that the <a href="https://www.governance.ai/people">advisory board is six people</a>, and beyond Helen and Tasha, the other four include one who currently works for Open Philanthropy and another who is the founder of GovAI, which was mostly funded by&#8230; Open Philanthropy. </p><ul><li><p>For OpenAI&#8217;s six-person Board, it was inappropriate for two Board Directors to be this strongly associated with an ideological organization and therefore so strongly and predictably aligned in their voting. 
It calls into question the independence of their votes.</p></li><li><p>Due to Open Philanthropy&#8217;s link to major OpenAI competitor Anthropic, there&#8217;s also a hint of corporate conflict here. If I were on OpenAI&#8217;s Board, I would have requested that at least Tasha<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> relinquish her seat in favor of a truly independent Director.</p></li></ul></li></ul><p>Secondly, what the hell happened in Q1/Q2 2023? </p><ul><li><p>Reid, Shivon, and Will all resigned, and the Board did not line up replacement Directors? By comparison, when Elon resigned in February 2018, Adam joined two months later. </p><ul><li><p>Were these seats just left vacant, with a deadlocked Board unable to agree on new Directors to appoint?</p></li></ul></li><li><p>They all resigned within a few months of one another despite OpenAI looking like the rocketship of the century? Something feels a little odd about that.</p></li></ul><p>It seems less likely that the November firings would have happened if Reid, Shivon, and Will &#8212; or even just one of them! &#8212; had still been on the Board, or been replaced with appropriate representatives. With this view, the outcome was almost predictable given these two facts:</p><ol><li><p>The thinning-out of the Board from 9 to 6 members;</p></li><li><p>Half of those 6 members carrying conflicts in their relationship with OpenAI!</p></li></ol><p>We will find out, in due time, the motivations of the Board in the November firings. Right now they aren&#8217;t clear. It isn&#8217;t known whether anyone acted inappropriately, and I am not accusing anyone (to be clear, even the Board Directors that I consider conflicted) of having acted subject to conflicts of interest. But the 2023 changes made drama likely, no matter what. A Board is a delicate balance of perspectives and interests. 
When a Board rapidly changes in size, rarely is the remainder left well-balanced. Potential conflicts only make the balancing act harder.</p><h2>Final Thought</h2><p>Governance can be messy. Time will be the judge of whether this act of governance was wise or not. But you should note that the people involved in this act of corporate governance are roughly the same people trying to position themselves to govern policy on artificial intelligence. </p><p>It seems much easier to govern a single-digit number of highly capable people than to &#8220;govern&#8221; artificial superintelligence. If it turns out that this act of governance was unwise, then it calls into serious question the ability of these people and their organizations (Georgetown&#8217;s CSET, Open Philanthropy, etc.) to conduct governance in general, <em>especially</em> of the most impactful technology of the hundred years to come. Many people are saying we need more governance: maybe it turns out we need less.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Source is this Vanity Fair interview with Sam Altman, where he notes that the only Directors are himself and Elon: https://www.vanityfair.com/news/2015/12/sam-altman-elon-musk-openai </p><p>The official blog post confirms both Elon and Sam as co-chairs: https://openai.com/blog/introducing-openai</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Ideally I would have requested it of both Helen and Tasha, but it seems that Helen&#8217;s seat was bought-and-paid-for, so that might not have gotten a lot of traction. However, it appears that Tasha was actually meant to be an <em>independent </em>Director!</p></div></div>]]></content:encoded></item></channel></rss>