As I pointed out in the Dercuano introductory text, Dercuano contains much that is correct and original, but mostly what is original is not correct, and what is correct is not original. I think that phrase originated as a clever insult to somebody’s poor work, but in a sense it’s just the default state of human cognition: most of the new ideas we come up with are wrong, while most of our ideas are not new, and since correct ideas have better memetic fitness (all else being equal) our unoriginal ideas tend toward correctness. With enough focused effort it’s possible to figure out which original ideas are true, and if I were capable, I would have made that effort before making Dercuano public, but I haven’t managed it in many years.
On the other hand, there’s a third axis along which ideas can be evaluated, aside from (probable) correctness and originality: consequences or interestingness.
In Approaches to 3-D printing in sandstone, for example, it says that in Argentina in 2017, ordinary gray portland cement cost US$0.26 per kg, while the white grade cost about three times as much. Conceivably nobody has made this observation before, and quite probably it is a correct observation, so it is likely correct and, in a minimal sense, original. But it really matters very little whether the price ratio was 1:2 or 1:3 or 1:4 in Argentina in 2017, though conceivably that may someday be of interest to some historian of concrete; this knowledge enhances your capabilities very little.
At the other end of the spectrum, consider Becquerel’s observation in 1896 that, even in the dark, potassium uranyl sulfate blackened photographic plates left nearby, as if it were spontaneously emitting X-rays, which of course it was. The observation was hardly more creative than my note above about the ratio of prices of different kinds of cement, merely a report of an unexpected and unexplained labwork problem in a footnote of a paper. However, upon further investigation, this observation solved the mystery of how the sun could keep burning for billions of years; provided a substantial part of humanity with a source of energy that emitted no CO₂ and required only a tiny amount of fuel; made it possible to send space probes to the outer planets; changed the nature of warfare and ended World War II; revealed the existence of entirely unsuspected types of matter in the universe; and was a key part of the evidence for special relativity, which revealed that mass and energy were not two separate quantities, but the same quantity.
But the consequences of an idea are very situational, whether we’re talking about its logical consequences (the other propositions that its truth would entail) or its practical consequences (the results in the contingent world of its putative truth becoming known).
From the proposition, “Socrates is a man,” we cannot deduce that Socrates is mortal; nor can we deduce it from the proposition, “All men are mortal.” But if either proposition is known, the other has as a logical consequence the proposition that Socrates is mortal. So it is that the logical consequences of an idea depend on what else is known.
The practical consequences of Hero’s aeolipile were, famously, almost nil; but under somewhat different historical circumstances, steam-engines revolutionized industry in the 18th century. Condorcet voting made no impact on the USA’s political processes for at least two centuries, and the USA continues electing incompetent demagogues; Condorcet voting ensures that Debian’s project leaders are widely respected. Oil drilling in the Song dynasty lowered the price of salt; oil drilling in Pennsylvania made horse-drawn carriages obsolete. Cellphones in the US were relegated to executive status toys until the 2000s; cellphones in India allowed farmers and fishermen to capture what were previously middlemen’s profits. Movable type in the Song enabled the preservation of much of the Chinese literary canon, while movable type in Europe gave birth to the Reformation, liberalism, and the Westphalian state. So it is that the practical consequences of an idea depend on what else is practiced.
Here I’m not concerned with the practical consequences of “big ideas” that turned out to be false, like the inevitable withering away of the socialist state or the inevitable triumph of Daesh over “Rome”, but only ideas whose consequences would be big if true.
So, what ideas in Dercuano could have big consequences, if they turn out to be correct? And why?
One of the main themes of the last several years of Dercuano has been “clanking replicators” — more precisely autotrophic programmable self-replicating 3-D printers, and especially how to achieve autotrophic replication of the control computer necessary to control the printer’s actuators.
A workable self-replication design is big, if true, because it totally upends the principles of economics, in a way which I think will substantially improve the material well-being of the average human by reducing opportunities for oppression. I think the change will be more important than the Industrial Revolution, more important than the development of agriculture, possibly more important than fire. I go into somewhat more detail on the expected economic effects in Exponential technology and capital, Gardening machines, and Self replication changes, and on how to prevent disasters in Approaches to limiting self-replication, and there’s a fictionalized near-future scenario of less-radical digital fabrication technology in 2025 manufacturing and economics scenario.
However, on looking at Predictions for future technological development (2008), it’s obvious that my ability to forecast what the future holds is pretty poor, and strongly affected by wishful thinking.
The benefit of self-replicating 3-D printers in practice will be limited by the price of energy, whether that price is measured in a conventional way with currency or in more fundamental terms of natural resources, labor power, and capital investment; but energy should become much more abundant soon due to the uptake of solar photovoltaic energy — see the section below on the solar energy transition.
The problem of self-replication can be crudely divided into the problem of designing a cyclic fabrication system, a term I’m possibly abusing to mean a set of material-processing, part-forming, and assembly processes which individually consume one another’s outputs but collectively consume only natural materials; and the special problem of how to put together a computing system that’s fast and reliable enough to direct the cyclic fabrication system to produce the desired product, without requiring exotic materials and geometries those processes can’t themselves produce. In particular, alternatives to the very challenging processes used to fabricate modern mass-produced semiconductors would be very welcome, keeping in mind that the economics are very different.
An overview of the whole problem is in Simplified computing, down to the level of mining raw materials.
So I explored alternative digital logic technologies in mechanical computation: with Merkle gates, height fields, and thread, Nobody has yet constructed a mechanical universal digital computer, Ideas to ship in 2014, Simple state machines, An extremely simple electromechanical state machine, Steampunk spintronics: magnetoresistive relay logic?, Digital logic with lasers, induced X-ray emission, and neutron-induced fission, for femtosecond switching times?, Making a mechanical state machine via sheet cutting, Transmission line diode computation, Diode logic, Snap logic, Hall-effect Wheatstone bridges for impractical steampunk electronic logic gates, Nonlinear differential amplification, Paper/foil relays, and Non-inverting logic, largely with an eye to things that could be built without million-dollar semiconductor fabs. Clanking replicators touches on this a bit too.
In another direction, though, the control topic largely covers negative-feedback control, including speculative sensor approaches like Charge transfer servo, Starfield servo, and Servoing a V-plotter with a webcam?, as well as codesigning physical and control systems for feedback control in High-precision control of low-stiffness systems with bounded-Q resonances; and notes like Differential spiral cam cover control systems that aren’t purely digital, which could reduce the demands on the digital part of the system.
When it comes to the materials-processing side of things, I’ve written some overviews, like 2016 outlook for automated fabrication and 3-D printing, and many of the notes in The book written in itself cover this ground too. I’ve come to the conclusion that Minecraft is misleading; you start with fire, then clay. Any practical terrestrial cyclic fabrication system will probably begin with clay ceramic. So in addition to the materials category, there’s a ceramic category, and Clay fabrication objectives talks specifically about what to do with clay, and Flux deposition for 3-D printing in glass and metals and 3-D printing by flux deposition talk a bit about some processes that I think might work well. More broadly, the manufacturing category has notes on many different manufacturing processes, and digital fabrication has notes on digital fabrication processes, some existing and some speculative. Elastic metamaterials talks about workarounds for the limitations of inorganic materials at room temperature, while Plastic cutters describes a way to minimize the amount of very hard material needed if cutting is one of the processes in the CFS.
Other notes on existing or possible material-shaping processes include Hot wire saw, String cutting cardboard, Hot oil cutter, Regenerative fuel air cutting, Laser ablation of zinc or pewter for printed circuit boards, Filling hollow FDM things with other materials, Hot air ice shaping, Friction-cutting plastic, Single-point incremental forming of aluminum foil, and Sun cutter. Freeze distillation at 1 Hz is a possible material-refinement process, and Spark particulate sieve covers a possible way to make an air or water filter or mesh for grading solid powders. Cold plasma oxidation describes a process that is commonly used today for surface treatments, but which I think can also be used for some kinds of cutting and 3-D printing. And at the end of Caustics and in You can’t construct optical systems with arbitrary light transfers, but you can do some awesome shit there are some brief speculations on optical-surface fabrication.
Assembly processes like those explored in Maximal-flexibility designs for printable building blocks are useful not just for humans, but also potentially for machines, as they can produce macroscopic tight tolerances using low-precision assembly processes. “Voxel printers” is a recent marketing buzzword related to this.
So, in these areas, what in Dercuano might be an idea with big consequences, such as enabling autotrophic self-replication? Because probably Laser ablation of zinc or pewter for printed circuit boards isn’t it — I mean, even if it does work, it’s probably only an incremental improvement.
The family of 3-D printing processes described in 3-D printing by flux deposition is applicable to many areas and should extend the range of additive 3-D printing significantly. By my count, at present, it discusses some 27 candidate combinations of materials; I have confidence that at least some of them should work. If they are tried soon, and work, that could be a significant advance; presumably if nobody tries them until 2152 it will be a different story.
If one of the numerous alternatives to semiconductor logic mentioned above works reliably, can run at a megahertz or so, and can be fabricated under less demanding conditions, that would also be big, if true — again, if tried soon enough. And there are enough of them that it’s almost guaranteed that some of them will work.
Self-replicating machines will probably need optics, at least for cameras (see the section about sensors below) and quite likely also for solar furnaces. But existing approaches to optics fabrication are very expensive, especially for surfaces far from sphericity.
So there are several notes in Dercuano that propose new optics-fabrication processes: Jello printing, Caustics, You can’t construct optical systems with arbitrary light transfers, but you can do some awesome shit, and Flux deposition for 3-D printing in glass and metals. Any of these would be a substantial advance over existing methods if they work, and this would have significant consequences for achievable optics, entirely apart from self-replication.
The sensors category right now consists of five big ideas.
Starfield servo outlines a way to make some simple physical objects and less-simple algorithms that would enable a cheap webcam to become a remote multiple-degree-of-freedom sensor with, in some dimensions, submicron resolution over a range of a few meters. I think it will work.
Compressed sensing microscope describes the same technique applied to light microscopy, where it should enable subwavelength near-field imaging without lenses.
Measuring submicron displacements by pitch bending a slide guitar outlines a totally different way to measure submicron displacements over a range of a few meters with inexpensive equipment — electric-guitar pickups, this time, rather than webcams.
The Tinkerer’s Tricorder outlines a variety of hacks to build an inexpensive LCR meter similar to the popular M328.
Ghettobotics: making robots out of trash (and the category ghettobotics) explores how to build a self-sustaining industrial economy that consumes nothing but discarded electronics and other trash and produces, with a minimal amount of human effort, useful robots. It’s sort of Self Replication Lite™.
The archival category covers lots of possible ways to archive the humans’ knowledge to keep it from being lost, at many levels of the stack: physical substrates for information, ways of mass-producing the physical substrates to reduce chances they will all be destroyed, file-format compatibility, and archival virtual machines to guarantee file-format compatibility.
So, for example, Atmospheric pressure harvesting phoenix egg describes a power source that enables you to build a computer that could continue to run for centuries even while buried, barring too many hardware failures; Archival of hypertext with arbitrary interactive programs: a design outline discusses how to structure interactive hypertext to make archival possible, as do Instant hypertext and Kogluktualuk: an operating system based on caching coarse-grained deterministic computations.
Some extensions of William Beaty’s scratch holograms describes a way to archive large amounts of information on inexpensive, durable materials in a way that the humans can read without needing a working computer, as do Caustics, Data archival on gold leaf or Mylar with DVD-writer lasers or sparks, Rosetta opacity hologram, Holographic archival, Piezoelectric engraving, Quadratic opacity holograms, Archival transparencies, and A mechano-optical vector display for animation archival.
In between those approaches, there’s the possibility of archiving large amounts of information in an executable digital form, but providing a specification for an “archival virtual machine” that can execute the archived information, as proposed by Raymond Lorie and by Nguyen and Kay’s “Cuneiform Tablets” paper. Attacks on this problem include Bootstrapping instruction set, A simple virtual machine for vector math?, Lisp 1.5 in a stack bytecode: can we get from machine code to Lisp in 45 lines of code?, Designing an archival virtual machine, XCHG: An Archival Swap Machine, Archival with a universal virtual computer (UVC), and The Dontmove archival virtual machine.
As an alternative to making time capsules to bridge periods of time when the humans are uncooperative, we might be able to preserve history by enlisting their cooperation; Viral wiki discusses one approach to that.
If one or more of these approaches is successful at rescuing the humans’ knowledge from the Digital Dark Age, that would indeed be Big. (But it’s not clear how much of that depends on correctness; it probably depends more on implementation effort.)
In Robust local search in vector spaces using adaptive step sizes, and thoughts on extending quasi-Newton methods it is claimed that quasi-Newton methods require maintaining in memory an approximation of the Hessian, while gradient-descent methods have only linear convergence, and that perhaps a similarly quadratic order of convergence can be obtained with just the gradient by using Newton–Raphson iteration along the direction of the gradient. If all of that is true, which is unlikely, then it describes a numerical optimization method that is many orders of magnitude faster than the state of the art in high-dimensional spaces.
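To make the idea concrete, here is a minimal sketch of steepest descent whose step size is chosen by a one-dimensional Newton–Raphson iteration along the gradient, with the directional derivatives estimated by finite differences. The test function, its gradient, and all the numbers are illustrative assumptions of mine, not taken from the note:

```python
import numpy as np

def grad_newton_step(f, grad, x, eps=1e-4):
    """One steepest-descent step whose length is set by a 1-D
    Newton-Raphson iteration along the gradient direction: we
    minimize g(t) = f(x - t*d) where d = grad(x), estimating
    g'(0) and g''(0) by central differences."""
    d = grad(x)
    g = lambda t: f(x - t * d)
    g1 = (g(eps) - g(-eps)) / (2 * eps)          # g'(0); equals -|d|**2
    g2 = (g(eps) - 2 * g(0) + g(-eps)) / eps**2  # g''(0); curvature along d
    t = -g1 / g2 if g2 > 0 else eps              # Newton step for g'(t) = 0
    return x - t * d

# Illustrative use: an ill-conditioned quadratic bowl with minimum at (3, -1).
f = lambda x: (x[0] - 3) ** 2 + 10 * (x[1] + 1) ** 2
grad = lambda x: np.array([2 * (x[0] - 3), 20 * (x[1] + 1)])
x = np.array([0.0, 0.0])
for _ in range(50):
    x = grad_newton_step(f, grad, x)
```

On a quadratic this amounts to exact line search, which converges linearly; whether something like quadratic convergence can be recovered on general functions is exactly the note’s open question.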
More generally, I think mathematical optimization is a significant candidate for including in More thoughts on powerful primitives for simplified computer systems architecture as a basic element of computer systems design; $1 recognizer diagrams gives an example of how you can use it to replace ad-hoc procedural algorithm design with something much simpler, which I’m pretty confident will work.
In Heat exchangers modeled on retia mirabilia might reach 4 TW/m³ it is claimed that a particular three-dimensional fractal design for a recuperator-type heat exchanger could provide recuperators with orders of magnitude higher performance, rivaling that of regenerators. This could be a crucial enabling technology for many kinds of thermodynamic machines, including heat engines (possibly including micro-turbine generators) and climate-control systems (for example, A design sketch of an air conditioner powered by solar thermal power).
One of the largest changes in the material culture of the humans during the 21st century will be the transition away from fossil fuels as their main source of harnessed energy, since the alternative is global warming that may be sufficient to cause a mass extinction; right now it looks like they’ll change to solar photovoltaic energy during the late 2020s.
Since most of the resource cost of producing photovoltaic panels is an energy cost, and their EROEI is already quite high, this probably means a rapid exponential growth in the amount of energy available to be harnessed for human activities. This will make energy much cheaper than it’s ever been.
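The arithmetic behind that expectation fits in a few lines; the payback and lifetime figures here are illustrative assumptions of mine, not numbers from the notes:

```python
import math

# Assumed numbers: a PV panel that repays its embodied energy in 2 years
# and lasts 30 years has an EROEI of 30/2 = 15.
energy_payback_years = 2.0
lifetime_years = 30.0
eroei = lifetime_years / energy_payback_years

# If all panel output were reinvested in building more panels, capacity C
# would grow as dC/dt = C / payback, doubling every payback * ln(2) years.
doubling_time = energy_payback_years * math.log(2)   # about 1.4 years

# Growth over a decade under this (very optimistic) assumption:
growth_factor = 2 ** (10 / doubling_time)            # on the order of 150x
```

Real growth is much slower, of course, since only a small fraction of output is reinvested in panel production; the point is just that a short energy-payback time makes very fast exponential growth physically possible.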
The economics of solar energy is a somewhat dated overview of the basic issues, which are also discussed in The future of the human energy market (2014), Japan can achieve energy autarky via solar energy, but not much before 2027, and parts of Notes and calculations on building luxury underground arcologies for whoever wants them.
One of the predictable effects of abundant marketed energy is cheaper desalination and an end to water stress. See A quintuple-acting vacuum cascade to recycle heat for more efficient distillation and desalination, Fast sea salt evaporator, and Calculations about desalination in Israel.
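As a rough illustration of why cheap electricity ends water stress (my own back-of-the-envelope numbers, not figures from those notes): a modern reverse-osmosis plant uses roughly 3 kWh per cubic meter of fresh water, so at cheap daytime solar prices the energy cost of water is a few cents per tonne.

```python
kwh_per_m3 = 3.0        # rough energy use of a modern reverse-osmosis plant
price_per_kwh = 0.02    # assumed cheap daytime solar electricity, in US$
energy_cost_per_m3 = kwh_per_m3 * price_per_kwh   # US$0.06 per m³ (one tonne)
```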
Among the more dramatic results of this transition is that, until there are intercontinental HVDC lines or breakthroughs in utility-scale energy storage, energy is going to be a lot cheaper during the day than at night, which means that “demand response” is going to be really important for taking advantage of the available energy. Salt slush refrigeration and Household thermal stores discuss how to do household refrigeration and climate control in a demand-response-friendly way.
It is, however, possible to build enough storage with existing lithium battery technology to sustain current energy usage levels during the night; Terrestrial lithium supplies provide adequate energy storage to reach Kardashev Type 1 discusses the available resources, and Energy storage efficiency discusses the economics in more detail.
The evils of fractional-reserve banking are a favorite hobbyhorse of economic cranks and conspiracy theorists. I don’t think it’s evil, and moreover I think the standard economic-crank position dramatically overstates its importance, but it does have some real problems, such as bank runs, and I think that now we can do better; Replacing fractional-reserve banking with a bond market disintermediated with a blockchain explains how.
A dismaying quantity of current computer software amounts to ways of caching parsing results because parsing is so slow. One attack on this problem is to use a data structure serialization format like FlatBuffers that permits random access; another is to use faster parsing algorithms. In Profile-guided parser optimization should enable parsing of gigabytes per second I suggest ways to increase parsing speeds to a sufficiently high level that much of that caching code can be thrown away. They might work.
The old joke is that there are three hard problems in computer science: naming, cache invalidation, and off-by-one errors.
“Cache invalidation” is the process of determining when some cached result should be updated, which is a very general concept, and different kinds of caches are ubiquitous in computer systems architecture, at every layer from RTL design up to container orchestration, for reasons that include improving throughput, protecting privacy, tolerating faults, reducing average latency, and reducing worst-case latency. The vast majority of complexity in computer systems does in fact amount to logic that manages different kinds of caches.
I have found several promising ways to unify many, though not all, of those caches in a single caching subsystem, which, if one of them works, would dramatically simplify computer systems design while also dramatically improving performance.
In A minimal dependency processing system, Fault-tolerant in-memory cluster computations using containers; or, SPARK, simplified and made flexible, Kogluktualuk: an operating system based on caching coarse-grained deterministic computations, Automatic dependency management, and Immutability-based filesystems: interfaces, problems, and benefits, I discuss ways to architect computer systems that simplify this problem; Transactional screen updates, Caching screen contents, and Cached SOA desktop focus specifically on the problem of GUI caching, because it’s a particularly demanding aspect of the problem that illuminates it from a particularly useful angle. In Memoize the stack and Amnesic hash tables for stochastically LRU memoization I discuss particular generic algorithms that might be useful.
More generally, the topic “caching” covers many different aspects of the problem.
In Paper/foil relays I describe an electrostatic relay design that might be feasible at millimeter scale and below to get reasonably-fast digital logic without any advanced materials processing. Unlike electromagnetic relays, these work better at smaller scales.
In Real-time bokeh algorithms, and other convolution tricks I explored a number of algorithms for simulating camera bokeh and discovered a general convolution algorithm for kernels that take a small number of discrete multiplier values in large contiguous blocks — such as, for example, camera bokeh kernels for ideal lenses. If it works, it’s an order of magnitude or more faster than any previously published algorithm for this problem, beating even McGraw’s approximate algorithm (although McGraw’s can handle spherical aberration, which my algorithm can’t).
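The prefix-sum trick that makes block-constant kernels cheap can be shown in one dimension; the interface below is one I made up for illustration, and the bokeh algorithm itself works on 2-D images, but the principle is the same: the dot product of a constant run of the kernel with the signal is just a difference of two prefix-sum entries.

```python
import numpy as np

def block_kernel_convolve(signal, blocks):
    """Correlate `signal` with a piecewise-constant kernel.  `blocks` is a
    list of (start, stop, value) runs: the kernel is `value` on offsets
    [start, stop) and zero elsewhere.  Cost is O(n * number_of_runs)
    rather than O(n * kernel_length)."""
    n = len(signal)
    prefix = np.concatenate([[0.0], np.cumsum(signal)])  # prefix[m] = sum(signal[:m])
    out = np.zeros(n)
    idx = np.arange(n)
    for start, stop, value in blocks:
        lo = np.clip(idx + start, 0, n)   # clipping treats out-of-range samples as zero
        hi = np.clip(idx + stop, 0, n)
        out += value * (prefix[hi] - prefix[lo])
    return out
```

Each constant run costs one vectorized subtraction of prefix sums, independent of how long the run is, which is what makes large bokeh kernels nearly free.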
In Dehydrating processes and other interaction models I propose a sort of taxonomy of computer systems architectures based on something called “interaction models”, which has to do with the relationship between individual programs and the rest of the system. It could stand to be sharpened up a bit.
Historically, human–computer interaction has mostly been through a keyboard, screen, and mouse. The screen provides perhaps 10 megabits per second of output bandwidth, while the mouse provides perhaps 50 bits per second of input bandwidth (but limited to about 6 bits per second in practice, according to the experiments in Some musings on applying Fitts’s Law to user interface design and data compression), and the keyboard another 15 or so, for a total of about 20 bits per second. While this vast disparity hasn’t been much of a limitation for consumption-type activities like watching the Gangnam Style video or playing Flappy Bird, it’s a major limitation for using the computer as a means for creative expression — “magic ink” or a “bicycle for the mind”, in the phrases of Steve Jobs and Bret Victor. And it is ultimately through creating great things, not consuming great things, that the humans can become great themselves.
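For context, Fitts’s law assigns a pointing task an information content that can be computed directly; the numbers below are illustrative choices of mine, not the measurements from that note’s experiments:

```python
import math

def fitts_bits(distance, width):
    """Fitts's-law index of difficulty: acquiring a target of width W at
    distance D conveys log2(2D/W) bits; dividing by the movement time
    gives the throughput of the pointing device."""
    return math.log2(2 * distance / width)

# A 10-pixel-wide target 300 pixels away carries about 5.9 bits;
# acquired in about a second, that is roughly the 6 bits per second above.
bits = fitts_bits(300, 10)
```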
Multitouch input devices have dramatically higher input bandwidths than mice or keyboards, so in theory they might offer dramatically greater expressive freedom to computer users. Sadly, despite some promising prototypes demonstrated by Bret Victor and other researchers, the humans mostly interact with them by scrolling vertically with one finger, selecting pre-existing options by tapping on them, or using an on-screen keyboard.
We could escape this multitouch Skinner-box Sheol with the ideas in Two-thumb quasimodal multitouch interaction techniques, Interactive calculator, drag-and-drop calculator for touch devices, Interactive geometry, $1 recognizer diagrams, Multitouch livecoding, and Dercuano drawings, if those ideas work out.
Maybe you can build a synthesizer that kids of all ages can play by humming into it, and that would be a big hit; that’s what The Magic Kazoo: a synthesizer you stick in your mouth is about.
In Ultralight tunnel personal rapid transit I calculate the performance of a new kind of rapid transit system, one so much more efficient and frugal that it would enable entirely new kinds of urbanization, combining the benefits of suburbia with the benefits of dense cities; Notes and calculations on building luxury underground arcologies for whoever wants them speculates on the kinds of sustainable, resilient community living that could result.