One line thoughts:
How do you get rid of the ozone from a plasma garbage incinerator? Platinum?
A text editor could highlight words by their information content, and color them by their context.
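A rough sketch of the scoring half of that idea (the names here are made up, not any editor's API): score each word by its self-information, -log2 of its relative frequency in the document, so the editor could map bits to highlight intensity.

    import math
    import re
    from collections import Counter

    def word_information(text):
        """Bits of self-information for each word, relative to this text."""
        words = re.findall(r"\w+", text.lower())
        freq = Counter(words)
        total = len(words)
        return {w: -math.log2(n / total) for w, n in freq.items()}

    sample = "the cat sat on the mat because the mat was warm"
    for word, bits in sorted(word_information(sample).items(), key=lambda kv: -kv[1]):
        print(f"{bits:5.2f}  {word}")   # rarer words carry more bits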
You can cluster sequence items iteratively by considering the entropy of the sequence under that clustering: a clustering that allows a better prediction of the item that follows each item is a better clustering.
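A sketch of just the scoring step, assuming "entropy under the clustering" means the conditional entropy of the next item given the cluster of the current item (lower is better); the example sequence and clusterings are made up:

    import math
    from collections import Counter, defaultdict

    def conditional_entropy(sequence, cluster_of):
        """H(next item | cluster of current item), in bits."""
        nexts_by_cluster = defaultdict(Counter)
        for a, b in zip(sequence, sequence[1:]):
            nexts_by_cluster[cluster_of[a]][b] += 1
        total = len(sequence) - 1
        h = 0.0
        for nexts in nexts_by_cluster.values():
            n_c = sum(nexts.values())
            for count in nexts.values():
                p = count / n_c
                h -= (n_c / total) * p * math.log2(p)
        return h

    seq = "acbdadbcbdac"                          # a and b behave alike; so do c and d
    natural   = {'a': 0, 'b': 0, 'c': 1, 'd': 1}
    scrambled = {'a': 0, 'c': 0, 'b': 1, 'd': 1}
    print(conditional_entropy(seq, natural),      # the natural pairing scores lower
          conditional_entropy(seq, scrambled))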
Relatedly, the forward algorithm for hidden Markov models (rather than Viterbi, which gives only the single most likely hidden-state path) gives you a probability distribution over the next item in a sequence, given an HMM for that sequence. This gives you an optimization problem for the hidden Markov model; solving the optimization problem would give you the HMM that best models that sequence (under whatever constraints).
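A minimal sketch of that prediction with made-up HMM parameters: run the forward (filtering) recursion over the observed prefix, then push the state distribution through the transition and emission matrices to get p(next symbol).

    import numpy as np

    T = np.array([[0.9, 0.1],     # state-transition matrix
                  [0.2, 0.8]])
    E = np.array([[0.7, 0.3],     # emission matrix: rows = hidden states,
                  [0.1, 0.9]])    # columns = observable symbols 0 and 1
    pi = np.array([0.5, 0.5])     # initial state distribution

    def next_symbol_distribution(observations):
        """Forward (filtering) recursion, then one predictive step."""
        alpha = pi * E[:, observations[0]]
        alpha /= alpha.sum()
        for obs in observations[1:]:
            alpha = (alpha @ T) * E[:, obs]
            alpha /= alpha.sum()
        return (alpha @ T) @ E    # p(next symbol = 0), p(next symbol = 1)

    print(next_symbol_distribution([0, 0, 1, 1, 1]))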
watch(1) has a -d option to highlight parts of the screen that have changed since the last update. A generalization of this would be to dim screen items according to how long it has been since they last changed.
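A toy sketch of that generalization (nothing to do with how watch itself is implemented): render each line in an ANSI-256 gray level that fades the longer the line has gone unchanged; the fade window is an arbitrary choice.

    import time

    def render(lines, last_changed, now=None, fade_seconds=30):
        now = now or time.time()
        out = []
        for line, t in zip(lines, last_changed):
            age = min((now - t) / fade_seconds, 1.0)
            gray = 255 - int(age * 12)          # 255 (bright) down to 243 (dim)
            out.append(f"\x1b[38;5;{gray}m{line}\x1b[0m")
        return "\n".join(out)

    now = time.time()
    print(render(["load: 0.42", "uptime: 3 days", "users: 2"],
                 [now, now - 15, now - 120]))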
As a way to produce interesting shapes, for example for fonts, how about third-derivative-continuous splines that pass through random grid points at evenly spaced time intervals? This space is small enough that you could exhaustively search it: 4 points chosen (with replacement) from a set of 4, for example, gives you 4⁴ = 256 glyphs; chosen from a set of 6, you get 6⁴ = 1296; and if you choose 5 from a set of 6, you get 6⁵ = 7776.
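One way to realize this, with assumptions the note doesn't make: a quintic interpolating spline (C⁴ at simple knots, so certainly third-derivative-continuous) through three copies of the point list, keeping only the middle period so the sampled curve starts and ends at the same point (the seam itself needn't be smooth).

    import itertools
    import numpy as np
    from scipy.interpolate import splev, splrep

    grid = [(0, 0), (0, 1), (1, 0), (1, 1)]            # the "set of 4" grid points
    glyphs = list(itertools.product(grid, repeat=4))   # 4**4 = 256 point tuples

    pts = np.array(glyphs[137] * 3, dtype=float)       # three copies of one glyph's points
    t = np.arange(len(pts))                            # evenly spaced time values
    tck_x = splrep(t, pts[:, 0], k=5, s=0)             # quintic interpolation: C^4,
    tck_y = splrep(t, pts[:, 1], k=5, s=0)             # hence at least C^3

    tt = np.linspace(4, 8, 200)                        # sample only the middle copy
    outline = np.column_stack([splev(tt, tck_x), splev(tt, tck_y)])
    print(len(glyphs), outline[:3])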
Another way to produce interesting shapes, for example for fonts: how about triangular Wang tiles? A complete triangular Wang tile set with two edge-colors could consist of three tiles plus their rotations; the contents of every two such tiles could be encoded in three bits.
Does Bayesian inference in general produce a probability model that, in some sense, minimizes the entropy of the observations and priors that went into it? That is, if you start with some priors and then update them Bayesianly from some observations, you get some probabilistic model. Given a probabilistic model, you can measure the entropy of a set of observations under it, i.e. their total negative log-probability. Does Bayesian inference minimize that? It would seem that maximum-likelihood estimation (which is not Bayesian!) minimizes it.
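A tiny numerical illustration of the last sentence, not a proof, assuming coin flips and a uniform Beta(1, 1) prior: the frozen-MLE model assigns the observations fewer total bits than the Bayesian posterior predictive does.

    import math

    observations = [1, 1, 1, 0, 1, 1, 0, 1]            # 6 heads, 2 tails
    heads, n = sum(observations), len(observations)

    def bits_under_fixed_p(p_heads):
        return sum(-math.log2(p_heads if x else 1 - p_heads) for x in observations)

    mle_bits = bits_under_fixed_p(heads / n)           # p frozen at the MLE, 0.75

    bayes_bits, h, t = 0.0, 0, 0                       # sequential posterior predictive
    for x in observations:
        p = (h + 1) / (h + t + 2)                      # Laplace's rule of succession
        bayes_bits += -math.log2(p if x else 1 - p)
        h, t = h + x, t + 1 - x

    print(mle_bits, bayes_bits)                        # the MLE total is never larger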
Given a desired OTF, the cheapest dataflow graph of convolutions to produce it is the kind of thing you can solve with an optimizer.
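A toy 1-D sketch of the optimization step, reading OTF as a transfer function and fixing the graph shape to a two-stage cascade (the target and kernel lengths below are made up for illustration; searching over the graph structure itself is the harder combinatorial part the sketch leaves out).

    import numpy as np
    from scipy.optimize import minimize

    N = 64
    target = np.fft.rfft(np.array([1, 4, 6, 4, 1]) / 16.0, N)   # desired response

    def cost(params):
        b1, b2 = params[:3], params[3:]
        cascade = np.convolve(b1, b2)                  # two convolution stages in series
        return np.sum(np.abs(np.fft.rfft(cascade, N) - target) ** 2)

    result = minimize(cost, x0=np.full(6, 0.3), method="Nelder-Mead")
    print(result.fun, np.convolve(result.x[:3], result.x[3:]))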
Can we improve the readability of text by running it through a spatial FIR filter matched to the spatial-frequency response of the human visual system?
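A sketch under an assumption not in the note: model the visual system's contrast-sensitivity response as a band-pass difference of Gaussians (the sigmas and gain below are made up, not a fitted CSF), and pre-emphasize that band in the rendered page.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def csf_boost(image, center_sigma=1.0, surround_sigma=3.0, gain=0.5):
        """Add back a band-pass (difference-of-Gaussians) copy of the image,
        a crude stand-in for the eye's contrast sensitivity function."""
        band = gaussian_filter(image, center_sigma) - gaussian_filter(image, surround_sigma)
        return image + gain * band

    page = np.random.default_rng(0).random((64, 64))   # stand-in for a rendered text page
    print(csf_boost(page).shape)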
Where did “This is why we can’t have nice things.” and “Do you want ants? This is how we get ants.” come from? http://knowyourmeme.com/memes/this-is-why-we-cant-have-nice-things says, “One of the earliest notable mentions came from Paula Poundstone, an American stand-up comedian who used the phrase in her HBO stand-up special, Cats, Cops and Stuff (1990).[1][2]” But http://knowyourmeme.com/memes/do-you-want-ants is apparently from 2009.
Has someone made a DIY vinyl cutter and documented the process online? Yes, http://www.instructables.com/id/DIY-CNC-Graphics-cutter-hack/ http://www.instructables.com/id/Printer-to-vinyl-cutter-hack/ http://hackedgadgets.com/2009/01/18/diy-vinyl-cutter-from-a-hp-draftmaster-rx-pen-plotter/.
An amusing refactoring is to insert a call followed by a ret into the middle of a function, with the call targeting the instruction just after that ret; this effectively converts the tail of the function into a new function. If you omit the ret, the tail of the function runs twice.
Can you optimize an RNN (including hyperparameters like depth!) to produce a probability distribution of the next character of a text, and thus get good data compression? Can you beat PPM for the Hutter Prize this way? Nobody has won since 2009: “executable of size S < L := 15’949’688 = previous record. 50’000€×(1-S/L). Minimum claim is 1500€.”. PAQ8 (the record holder since the beginning of the prize in 2006) already combines probabilities from different models using an ANN. I’d have to beat it by 3%: 15’471’197 bytes or less. Seems maybe doable.
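Not an RNN, but a sketch of the accounting that links prediction to compression: whatever model emits p(next character | context), arithmetic coding gets within a few bytes of the sum of -log2 p bits, so a better predictor is a better compressor. Here a Laplace-smoothed order-2 character model (a toy stand-in, nothing like PAQ8 or an RNN) does the predicting.

    import math
    from collections import Counter, defaultdict

    def estimated_compressed_bits(text, order=2):
        """Sum of -log2 p(char | previous `order` chars), updated adaptively."""
        counts = defaultdict(Counter)
        bits = 0.0
        for i, ch in enumerate(text):
            context = text[max(0, i - order):i]
            seen = counts[context]
            p = (seen[ch] + 1) / (sum(seen.values()) + 256)   # Laplace smoothing
            bits += -math.log2(p)
            seen[ch] += 1
        return bits

    text = "the quick brown fox jumps over the lazy dog " * 50
    print(estimated_compressed_bits(text) / 8, "bytes for", len(text), "characters")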
Hey, FIR filters are almost the same as linear homogeneous recurrences. They’re just not recursive.
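A side-by-side sketch of the similarity: the two formulas have the same shape, but the FIR filter combines past inputs while the linear homogeneous recurrence feeds back its own past outputs.

    def fir(x, b):
        """y[n] = sum_k b[k] * x[n-k]: a combination of past *inputs*, no feedback."""
        return [sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
                for n in range(len(x))]

    def recurrence(seed, a, n_terms):
        """y[n] = sum_k a[k] * y[n-1-k]: same shape, but past *outputs* -- recursive."""
        y = list(seed)
        while len(y) < n_terms:
            y.append(sum(ak * y[-1 - k] for k, ak in enumerate(a)))
        return y

    print(fir([1, 0, 0, 0, 2], [0.5, 0.25, 0.25]))
    print(recurrence([1, 1], [1, 1], 10))        # Fibonacci numbers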
Did GW-BASIC have erf()?