My undergraduate university had a graduation requirement of 45 units of analytical math, which I greatly resented because that was time better spent getting high. I already knew that I was either going to spend my life building computers or redeeming aluminum cans for nickels, and partial differential equations really weren’t a prerequisite for either.
THINK AGAIN.
Some time after graduation, I visited a classmate who had recently completed his PhD in math and asked him what it is that math people do for a PhD.

“We choose some unproven theorem, and then figure out how to prove it.”
“What if you choose something ridiculous, like Fermat’s Last Theorem? You could be stuck in grad school for centuries!”
“Right, hopefully your graduate advisor tells you not to choose Fermat’s Theorem.”
Andrew Wiles actually proved Fermat's Last Theorem, announcing a proof in 1993 and closing a gap in it by 1994, but it took 358 years of mathematical effort.
It started in 1637, when Pierre de Fermat wrote the following equation in the margin of a text, and stated that no three positive integers a, b, and c satisfy it for any integer n greater than 2:

a^n + b^n = c^n
For hundreds of years, mathematicians made unsuccessful attempts at a proof. In 1984, Gerhard Frey observed that if Fermat's equation has a solution, then it can be shown that the following elliptic curve is not modular:

y^2 = x(x - a^n)(x + b^n)
Elliptic curves are important tools because the solution set forms an Abelian group, where you can add two points on the curve and get another point on the curve. This fact makes them useful for defining groups with particular properties, like secp256k1 for the digital signature algorithm used in Bitcoin.
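To make the group law concrete, here's a toy sketch (my example, not part of the original post): point addition on the curve y^2 = x^3 + 7, the same equation shape as secp256k1, but over the integers mod 17 so the arithmetic fits on one screen.

```python
# Toy curve: y^2 = x^3 + 7 over F_17 (same shape as secp256k1, tiny prime).
# Points are (x, y) tuples; None stands for the point at infinity (identity).
P_MOD = 17

def on_curve(pt):
    if pt is None:
        return True
    x, y = pt
    return (y * y - (x ** 3 + 7)) % P_MOD == 0

def ec_add(p1, p2):
    """The Abelian group operation: add two points, get another point on the curve."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    x1, y1 = p1
    x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None  # p2 is the inverse of p1
    if p1 == p2:
        s = (3 * x1 * x1) * pow(2 * y1, -1, P_MOD) % P_MOD  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD     # chord slope
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

# Adding two curve points always lands back on the curve:
P, Q = (1, 5), (2, 7)
R = ec_add(P, Q)  # -> (1, 12)
assert on_curve(P) and on_curve(Q) and on_curve(R)
```

The real secp256k1 works the same way, just with a 256-bit prime instead of 17.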
By proving that the class of elliptic curves containing Frey's curve (the semistable ones) must be modular, Wiles proved Fermat's Last Theorem.
Proving a mathematical theorem, or constructing nonobvious examples, involves taking some set of underlying knowledge and putting it together in a new way. If you get stuck, it’s either because you were not aware of enough pieces, or you had all the pieces but couldn’t figure out how to fit things together. Determining which problem you have is the process of hacking a system.
In The Mathematical Hacker, Evan Miller brings up the example of a Fibonacci calculator.
Write a function that generates the nth number of the Fibonacci sequence (1, 1, 2, 3, 5, 8, ...).
Here’s a solution that uses recursion:
def Fib(n):
    if n <= 1:
        return n
    else:
        return Fib(n - 1) + Fib(n - 2)
Neat. Now pretend you’re an HFT programmer and you need that function to calculate Fibonacci ratios. The code above would totally get you fired because the naive recursion recomputes the same subproblems exponentially many times, and by the time it’s finished ten nanoseconds have elapsed and some guy in Weehawken just ate your lunch.
This is an imaginary scenario, because algo traders probably hard-code their Fibonacci ratios if they use them at all, but the point is that it’s possible to perform the calculation in logarithmic time:
import math

def Fib(n):
    sqrt5 = math.sqrt(5.0)
    return (math.pow(0.5 + 0.5 * sqrt5, n) - math.pow(0.5 - 0.5 * sqrt5, n)) / sqrt5
In other words, Binet’s formula:

F(n) = (φ^n - ψ^n) / √5, where φ = (1 + √5) / 2 and ψ = (1 - √5) / 2
Did the Fibonacci function actually need recursion, or were we simply not aware of the closed-form solution? The trend in computer languages and tools is to create an environment where programmers never have to think about questions like that. That’s a fine strategy for enterprise software development, but it fails when it comes to problems with actual resource constraints.
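One caveat: the closed form leans on floating point, which loses precision once n gets large. An exact logarithmic-time computation is also possible with plain integers, using the fast-doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2. A minimal sketch (the function name is mine):

```python
def fib_pair(n):
    """Return (F(n), F(n+1)) in O(log n) multiplications via fast doubling.

    Uses the exact integer identities F(2k) = F(k) * (2*F(k+1) - F(k)) and
    F(2k+1) = F(k)^2 + F(k+1)^2, halving n at each step.
    """
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)   # a = F(k), b = F(k+1), where k = n // 2
    c = a * (2 * b - a)       # F(2k)
    d = a * a + b * b         # F(2k+1)
    return (d, c + d) if n & 1 else (c, d)

print(fib_pair(10)[0])  # -> 55
```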
The smartest math majors I knew in college are now employed as quants and algo traders on Wall Street. I guess that’s a good sign for our capital markets, or maybe possibly not.
FWIW, exponentiation is not a constant-time operation.
Also, I’m a math PhD working for a Bitcoin company. So that’s fun.
Another fun example of resource constraints requiring math and leading to neat programs is the fast inverse square root: https://en.wikipedia.org/wiki/Fast_inverse_square_root
Logarithmic time, yeah?
Cool, thanks for the example!
…just came from reading math ∩ programming, hey Jeremy!