Hawk-Eye line detection in tennis was not inevitable
There could have been many ways to auto-detect line calls in tennis. We picked one. But not because it was technically the "best," at least not in the beginning.
The other day, the tennis reporter Christopher Clarey posted that it was the end of an era: automatic line-calling (ALC) would finally come to the most traditional of the tennis majors, Wimbledon.

Was this “inevitable,” as Clarey says? I think the answer is “yes,” if what you mean is: “as it became technically possible, it was inevitable that line-calling (a heavy-duty job requiring constant attention) would eventually be done by machines.”
But did automatic line-calling systems need to look EXACTLY the way they do now, with cameras, computer vision algorithms, and fancy visualizations? The company that seems to have won is called Hawk-Eye. Hawk-Eye puts about a dozen cameras around the court, tracks the ball with sophisticated machine learning and computer vision algorithms, makes a probabilistic assessment of whether the ball is in or out, and then displays its “reasoning” through a fancy-schmancy visualization. It’s a system that has taken millions of dollars to develop and that costs tournaments something like millions of dollars to deploy and use.
Is Hawk-Eye good for the game and good for the world? I would argue it is, though I have one big reservation. But that’s a different question and probably a different post.
But that automatic line-calling systems would look like THIS was not inevitable at all. In the beginning, as we’ll see, going back at least to the 1990s, there were a variety of possibilities for how automatic line-calling could be done. There were systems that used sound waves, systems that wired up the lines to detect the ball, and computer vision systems (and even those could sometimes look different from Hawk-Eye).
Why did Hawk-Eye become successful? Not so much because it was technically better than its competitors (at least not in the beginning), but because it offered a much better way to reconcile tennis with its various audiences: players, coaches, and television stations. Hawk-Eye is not just an automatic line-calling system; it is a surveillance system that can be used to track tennis matches. This, in turn, means that it can be used by players, coaches, journalists, and television stations as a source of analytics. It was these possibilities of Hawk-Eye, its compatibility with different audiences, that made it the eventual winner in the contest between different ALC modalities.
Note that the point here is not that Hawk-Eye was less accurate than other ALC systems (and of course, Hawk-Eye itself has improved and become more accurate over time), but that “accuracy” is the wrong category to think with if you want to understand why this ALC system succeeded and the others did not.
Varieties of ALC systems
In 2003, the governing body for tennis, the International Tennis Federation (ITF), organized a Tennis Science and Technology conference. Of all the papers presented and discussed at that conference, five were on ALC systems. (In addition, if you search Google Patents, you can find many other patents filed on ALC systems over the years.)
These systems, at least from a cursory look, seem to be of three types based on the methods they use to track the ball:
Beam systems: Beam systems like Cyclops (the earliest ALC system, first installed at Wimbledon in 1980 and subsequently at the Australian and US Opens, and mentioned in Clarey’s tweet) worked by throwing electromagnetic beams from one side of the court to the other. The beams were carefully positioned: for Cyclops, one beam sat on the service line (parallel to the baseline), while the other four beams sat beyond the line, i.e., in the "out" area. By calculating which beams were interrupted, the system could decide whether the ball was in or out; if the ball was out, Cyclops produced a beep in real time.
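To make that decision logic concrete, here is a minimal sketch of how a Cyclops-style beam system might turn beam interruptions into a call. The beam layout, the names, and the simplifications are my assumptions for illustration, not the actual Cyclops design.

```python
# A toy model of a Cyclops-style beam decision (hypothetical names and layout,
# not the actual Cyclops hardware): one beam runs along the service line and
# several more cover the "out" area beyond it. A serve that interrupts an
# out-zone beam without interrupting the service-line beam is called out.
# Real systems also had to deal with timing and with feet breaking beams,
# which this sketch ignores.

from dataclasses import dataclass

@dataclass
class BeamReading:
    service_line_beam_broken: bool     # the beam sitting on the service line
    out_zone_beams_broken: list[bool]  # the beams covering the "out" area

def call_serve(reading: BeamReading) -> str:
    """Return 'out' (beep) or 'no call' for a single serve."""
    if any(reading.out_zone_beams_broken) and not reading.service_line_beam_broken:
        return "out"      # ball landed beyond the service line: beep in real time
    return "no call"      # ball caught the line or landed inside: stay silent

# Example: the ball lands just long, interrupting only the first out-zone beam.
print(call_serve(BeamReading(False, [True, False, False, False])))  # -> out
```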
Wired systems: In wired systems, the lines on the court (or even the balls) get wired up. I call them "wired" systems because they require going into the innards of physical entities like balls and tennis courts. Tennis Electronic Lines (TEL) was an early ALC system built by a Dutch company. In TEL, the balls as well as the lines were equipped with electronic sensors. The lines could "sense" the ball when it was close, and an algorithm then calculated the "footprint" of the ball, trying to take into account the ball's compressibility (whether it was elongated or squished, which affects the footprint) and whether it was skidding. When TEL was tried at the US Open in 1992 on all the non-service lines, it turned out that in 9% of all close calls, the umpires and TEL arrived at different results.
Another system developed by Signal Processing Systems Inc. in Sudbury, MA, worked by embedding electronic sensors into the court lines. These sensors generated signals when the line was "touched" by a ball; these signals were then turned into sounds which were piped to the line umpires manning that particular line. The line umpires would then use their judgment: they would augment their visual survey of the line with the sound produced by the system in order to call a ball in or out.
Computer vision systems: Last, but not least, there were computer vision-based ALC systems. In these systems, high-speed cameras were positioned at different points around the court to calculate the 3-D trajectory of the ball's flight after it leaves the racquet. The image from each camera was processed to obtain the 2-D coordinates of the ball and the court lines; these 2-D positions, together with the relative positions of the cameras, were used to compute a 3-D model of the ball's trajectory with respect to the court lines. This, in turn, was used to compute the "footprint" of the ball on the court, taking into consideration the velocity and the estimated compressibility of the ball. Finally, the 3-D trajectory was fed into a visualization that could be shown to the audience as well as the umpire for review. 1
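For a sense of what that pipeline involves computationally, here is a minimal sketch in Python, assuming calibrated cameras (each described by a known 3x4 projection matrix) and per-frame 2-D ball detections already extracted from the images. The triangulation step is standard direct-linear-transform (DLT) least squares; the function names and the bounce/footprint logic are illustrative assumptions, not Hawk-Eye's actual algorithm.

```python
# Sketch: multi-camera triangulation of the ball, then a crude in/out call.

import numpy as np

def triangulate(projections, pixels):
    """Recover one 3D point from its 2D image coordinates in several cameras.

    projections: list of 3x4 camera projection matrices (numpy arrays)
    pixels:      list of (u, v) image coordinates of the ball, one per camera
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each camera contributes two linear constraints on the homogeneous 3D point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Solve A x = 0 in the least-squares sense: the solution is the right singular
    # vector with the smallest singular value, de-homogenized.
    _, _, vt = np.linalg.svd(A)
    x = vt[-1]
    return x[:3] / x[3]

def call_line(trajectory, line_x, ball_radius=0.033):
    """Toy in/out call: take the lowest point of the 3D trajectory as the bounce,
    pad it by the ball radius as a crude footprint, and compare it to a line at
    x = line_x, with the inside of the court at x < line_x (units: meters).
    """
    trajectory = np.asarray(trajectory)               # shape (frames, 3): x, y, z
    bounce = trajectory[np.argmin(trajectory[:, 2])]  # frame where the ball is lowest
    footprint_inner_edge = bounce[0] - ball_radius    # nearest edge of the contact patch
    return "in" if footprint_inner_edge <= line_x else "out"
```

A real system interpolates the bounce between frames, models the skid and compression of the ball, and propagates the uncertainty of every measurement into the final call; the sketch above is only the geometric skeleton.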
Already, we can see how these systems have different logics in terms of how they fit into the world of tennis.
They require different sorts of installation procedures (which in turn have different costs). Wired systems require courts to be dug up to install sensors, or balls to be wired up; computer vision systems do not need that, but they do need cameras installed around the court with just the right views.
Different systems combine machine and human decision-making in different ways. The Cyclops system was fully automatic, as is the Hawk-Eye Live system now used at the US Open. But one of the wired systems described above essentially piped its output to the line umpire, who then made a judgment call by combining what he saw with what the system “saw.” And the original Hawk-Eye that we saw in use for over a decade was a review system whose output was generated as needed and shown to the chair umpire (though, to be clear, the chair umpire was not allowed to use judgment to evaluate the Hawk-Eye output).
Computer vision systems have a synoptic view of the court in that they “see” the whole court; the other systems are focused on particular lines and particular areas of the court, although there is no reason why these more local views can’t be combined into one synoptic view.
Computer vision systems have “memory” because they track the ball from the moment it leaves the racquet. But it’s not clear you need this memory if all you want to do is decide whether a ball was in or out.
Why did computer vision systems win?
Computer vision systems, even in their earliest instantiations, were premised on being more than just ALC systems. They were designed to be (eventually) detailed surveillance systems that would track the ball as it moved around the court, from its contact point with the racquet to its impact on the court. They would record who won the point, and even how the point was won (an ace, a winner, a forehand or backhand, or an unforced error).
Beam and wired systems would have had a much harder time becoming surveillance systems. In their basic form, they were far too decentralized: sensors embedded in balls and lines, localized beams monitoring specific lines. Integrating these fragments into a complete panoptic record would have been difficult, if not impossible.
As surveillance systems with a god’s-eye vantage point, computer vision systems could function very well as links between the worlds of tennis and television, producing data that would cement the existing ties between the two. They would allow the production of even finer statistics and slicker visualizations that television stations could use to attract audiences.2

Another constituency was players and coaches, who were interested in using this data to devise game plans and training strategies. As Jason Goodall reported in The Wall Street Journal:
All of this information [from using Hawk-Eye during a match] is stored to be used in the future. To prepare his charge for a match with Rafael Nadal at Wimbledon in 2008, Andy Murray's coach requested the data on where the Spaniard had placed all of his serves in previous matches at the Championships in order to try and discern a pattern and hence give Mr. Murray a possible edge in their quarter-final encounter.
What he learned was that Mr. Nadal had indeed changed his game plan for Wimbledon. On the clay at the French Open that year he hit the vast majority of both first and second serves to the backhand of his right-handed opponents. But on the lawns of the All England Club he sent far more serves at his opponent's forehand; his second serves were regularly fired at their bodies.
The point here is not that computer vision systems were inaccurate; the point is that, as systems, they fitted much better into the world of commercial tennis than beam or wired systems did. That fit, more than accuracy, explains their adoption and take-up.
The inevitability of ALC systems
Ruth Schwartz Cowan, the eminent historian of technology, has argued that failed technologies, “the rusting hulks of aborted ideas” (p. 127), can tell us a lot about how our world could have been different.
In one of my favorite sections from her book, titled “How the refrigerator got its hum,” she looks at the intertwined history of the electric refrigerator, “the machine that succeeded,” and the gas refrigerator, “the machine that failed.” Both machines, Cowan argues, were technically good enough. But the companies making the electric refrigerator had more money and more drive. They financed the innovations and the production; they spent a ton on marketing and advertising; and they got a very good product into the market quite early. The companies making the gas refrigerator, on the other hand, struggled to raise money, had a harder time getting their machine to work, and consequently arrived in the market about six years later, by which time the electric refrigerator had already established itself. The gas refrigerator stuck around for a while, but the winner was clear. Thus, we got refrigerators that hummed (electric refrigerators had motors) rather than ones that were silent (gas refrigerators had no motor).
As Cowan puts it:
General Electric became interested in refrigerators because it was experiencing financial difficulties after the First World War and needed to develop a new and different line of goods. G.E. decided to manufacture compression, rather than absorption, refrigerators [i.e., electric rather than gas refrigerators] because it stood to make more profits from exploiting its own designs and its own expertise than someone else's. Once having gone into the market for compression refrigerators, G.E. helped to improve that market, not just by its promotional efforts on its own behalf, but by the innovations that it could then sell to, or stimulate in, other manufacturers. And having done all that, G.E. helped to sound the death knell for the absorption machinery, since only a remarkable technical staff and a remarkable marketing staff, combined with an even more remarkable fluidity of capital, could have successfully competed with the likes of General Electric, Westinghouse, General Motors, and Kelvinator.
In other words, even if the home refrigerator was inevitable, that it was the electric refrigerator that became the dominant home model was not. It was a contingent outcome that arose out of certain structural conditions: the electrification of the country, the risk-taking behavior of the electric companies and utilities, the conservative behavior of gas companies, and of course, demand from consumers.
In the same way, the use of ALC systems in tennis was perhaps inevitable. But the form and shape these ALC systems took (how we got to Hawk-Eye) was as much about the commerce of tennis as it was about the accuracy of the systems.
1. Hawk-Eye was not one of the systems presented at the 2003 conference, but it would become the canonical example of a computer vision system. Note also that the early computer vision systems were designed more as review systems, generating output for umpires and players to consult. Hawk-Eye Live, the current real-time system that is going to replace line umpires, was not tried until 2017, and by that time Hawk-Eye had been used on the tennis tour for about a decade. It was the pandemic that led to the quick adoption of Hawk-Eye Live.
2. While it probably doesn't exist now, about 10 years ago ESPN.com created a website called CourtCast that let users watch a match "live" through a simulated visualization, streamed in real time and built from transformed Hawk-Eye data. Users could choose among options like watching a "rally" from different points around the court, or viewing statistics about aces, winners, forehands, and backhands, all visualized in terms of the ball's impact position on the court as well as its contact point with the racquet.