Posted by Brent Gregory on November 21st, 2013
43% of assumptions are incorrect.
87% of perceptions are distorted by cognitive bias.
52% of statistics are made up.
With so much wrong-headed confusion, how do we tell which way to go?
Experiments. I love experiments.
Pinewood Derby is a competition where kids build 7-inch (18 cm) cars and race them down an inclined track. Theories abound: Weight concentrated in the front will pull the car faster. Balance the weight at the center for better stability. Put the weight in the rear for more potential energy. After endless arguments, I built a car with movable weight and let everyone see for themselves which placement gives the fastest car. (The experiment showed that the center of gravity should be about 1 inch (2.5 cm) ahead of the rear axle.)
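The rear-weight theory can be sketched with a toy point-mass model: a center of gravity that sits further back starts higher on the incline, so it falls through a slightly larger height before the flat run to the finish. The numbers below (ramp drop, angle, track length) are illustrative assumptions, not measurements; the model ignores friction, wheel inertia, and the ramp-to-flat transition that the real movable-weight experiment captured, which is why it can't by itself explain the 1-inch-ahead-of-the-rear-axle result.

```python
import math

G = 9.81                       # m/s^2
RAMP_DROP = 1.2                # m, vertical drop of the start gate (assumed)
RAMP_ANGLE = math.radians(25)  # incline angle (assumed)
FLAT_LEN = 9.0                 # m, flat run-out to the finish (assumed)

def finish_speed(cg_from_front_m):
    # A CG further from the front sits higher on the incline at the start,
    # so the car converts slightly more potential energy into speed.
    extra_drop = cg_from_front_m * math.sin(RAMP_ANGLE)
    return math.sqrt(2 * G * (RAMP_DROP + extra_drop))

# Compare front, center, and rear CG positions in an 18 cm car.
for cg in (0.02, 0.09, 0.16):
    v = finish_speed(cg)
    t_flat = FLAT_LEN / v      # frictionless, so speed is constant on the flat
    print(f"CG {cg * 100:4.1f} cm from front: "
          f"v = {v:.3f} m/s, flat-section time = {t_flat:.3f} s")
```

Even this crude model shows why the rear-weight camp had a point: the rearmost CG is always fastest here. Only a real experiment reveals the stability penalty that pushes the optimum slightly forward.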
Good experiments cut through the intellectual fog created by wrong assumptions and biased perceptions to show what really works. Nowhere is this clearer than in Electronic Design Automation (EDA), the Grand Central Station of NP-complete problems. The problems are both NP-complete and very large, so optimality is often out of reach. But some algorithms get closer to optimum than others, and picking the best triggers heated debate.
All key EDA decisions are settled by experiment. Which vendor has the best tool? Run a benchmark. Which settings should we give to the EDA tool? Try them all, and pick the best. Which algorithm should we use inside the tool? Run many experiments to see what works best.
If you’re a programmer like me, then all this focus on experiments is a win. I have to write good software. But, I don’t have to spend much energy convincing other people that it is good. The experiments do that for me.
What is your seniority? How well-known are you? How convincing is your rhetoric? None of that matters if you work in an area where success is governed by experiments. The only thing that matters is: How good is your software? I love working in that kind of environment. Interested?