How Computer Programming Revolutionized Physics Research
There is an interesting history behind the relationship between computer programming and physics, at least in astrophysics and nuclear physics.
In the late 1950s, when computers were first becoming prominent in academic research, many physicists saw their potential for calculation: a computer could perform calculations far more efficiently than a human, and without getting tired.
Many problems in physics unfortunately do not have exact, analytical solutions. In fact, many of the problems we learn about in school often represent special cases where an exact answer can be calculated using pen and paper with very specific tricks and fancy footwork.
Fortunately, thanks to mathematicians, we do have a variety of tools in the form of numerical methods that can approximate, often to a very accurate degree, solutions to many of the problems in physics. The only drawback is that these techniques require numerous calculations to be done over and over again - maybe thousands, or even millions of times - before a solution is arrived at.
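To make the trade-off concrete, here is a small sketch (my own illustration, not from any of the sources quoted below) of one of the simplest numerical methods, the trapezoidal rule. It approximates an integral by repeating the same simple arithmetic many times, and the answer gets better the more steps you are willing to do:

```python
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):       # n - 1 interior evaluations, one by one
        total += f(a + i * h)
    return total * h

# Integral of sin(x) from 0 to pi is exactly 2.
# Each extra subdivision means more arithmetic but a better answer:
print(trapezoid(math.sin, 0.0, math.pi, 10))    # ~1.9835
print(trapezoid(math.sin, 0.0, math.pi, 1000))  # ~1.9999984
```

A thousand steps is trivial for a computer and miserable for a person with a pencil, which is exactly the situation the next section describes.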
Back then, if someone wanted to apply a numerical integration technique to solve or model a problem, they would have to employ a team (or an unlucky individual) to sit in a room and spend their days performing the calculations over and over again. Here is a quote from “Structure and Evolution of the Stars” by Martin Schwarzschild (1958) on solving the equations of stellar structure:
A person can usually accomplish more than twenty integration steps per day for a set of differential equations… Thus for a typical single integration consisting of, say, forty steps less than two days are needed… the entire numerical work for this fairly typical case can be accomplished by one person in one month.
Two days doesn’t seem that bad, but an entire month? To make it worse, he was only talking about the relatively simple case of solving for stellar structure:
However, if extensive evolutionary model sequences including a variety of physical complications are to be derived, then numerical integrations by hand may become prohibitive and the advantage of large electronic machines will be incontestable.
The advantage computers offered in carrying out these calculations is clear. Fortunately, their rise coincided with a growing need for work in numerical methods, and many physicists saw the opportunity early on and took advantage.
To quote Martin Schwarzschild in one of his papers in 1954, “Numerical Integrations for the Stellar Interior”:
It seems not unlikely that in the future much of the numerical work in the theory of the stellar interior will be done on large electronic computers.
Since then, physicists have had a close relationship with computer programming. Today, it’s rare to call yourself a physicist without having some knowledge of programming, even if just in MATLAB. Many physics departments require their students to take at least one course dedicated to scientific computing, with many other physics and math courses actually being programming courses in disguise.
As an undergraduate student, I took a course on Stellar Structures, where we did the exact same numerical integration that Martin Schwarzschild did in his 1954 paper, except instead of taking a month to get a solution by hand, we were able to get the solutions in less than a second.
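To give a flavor of what such an integration looks like today, here is a sketch in the same spirit. It does not reproduce Schwarzschild’s actual stellar-interior equations; instead it integrates the classic Lane-Emden equation for a polytropic star as a stand-in, stepping forward with the standard fourth-order Runge-Kutta method. The step size and starting point are my own illustrative choices:

```python
import math

def lane_emden(n=1.0, h=1e-4):
    """Integrate the Lane-Emden equation
    theta'' + (2/xi) theta' + theta**n = 0, theta(0)=1, theta'(0)=0,
    until theta first crosses zero; return xi at that stellar surface."""
    # Start slightly off xi = 0, using the series expansion
    # theta ~ 1 - xi**2/6 to avoid the coordinate singularity.
    xi = 1e-6
    theta = 1.0 - xi**2 / 6.0
    dtheta = -xi / 3.0

    def deriv(xi, theta, dtheta):
        # First-order system: theta' = dtheta,
        # dtheta' = -(2/xi) dtheta - theta**n
        # (clamp theta at 0 so fractional n never takes a negative base)
        return dtheta, -2.0 / xi * dtheta - max(theta, 0.0) ** n

    while theta > 0.0:
        # One classic fourth-order Runge-Kutta step
        k1 = deriv(xi, theta, dtheta)
        k2 = deriv(xi + h/2, theta + h/2 * k1[0], dtheta + h/2 * k1[1])
        k3 = deriv(xi + h/2, theta + h/2 * k2[0], dtheta + h/2 * k2[1])
        k4 = deriv(xi + h, theta + h * k3[0], dtheta + h * k3[1])
        theta += h / 6.0 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dtheta += h / 6.0 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += h

    return xi

# For n = 1 the exact solution is sin(xi)/xi, whose first zero is pi,
# so the result should land close to math.pi.
print(lane_emden(n=1.0))
```

Each run performs tens of thousands of Runge-Kutta steps, every one of them the kind of arithmetic Schwarzschild’s quote describes being done by hand, and it still finishes in well under a second on any modern machine.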
Adapted from my answer to a question on Quora.
Also published on Medium