A *flop* is one of the elementary arithmetic operations (addition, subtraction, multiplication, or division) carried out on floating-point numbers. For example, evaluating an expression such as `(x + y*z)/w` takes three flops. A square root, which occurs infrequently in numerical computation, is also counted as one flop.

For example, the computation of the inner product of two n-vectors x and y may be written

```
s = x(1)*y(1)
for i = 2:n
    s = s + x(i)*y(i)
end
```

which requires n multiplications and n − 1 additions, or 2n − 1 flops. A matrix multiplication C = AB of two n × n matrices involves computing the inner product of each row of A with each column of B, that is, n² inner products, costing n²(2n − 1) flops. As we are usually interested in flop counts for large dimensions we retain only the highest order terms, so we regard AB as costing 2n³ flops.
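The 2n − 1 count for an inner product can be checked by instrumenting the loop. The following Python sketch (the function name and counting convention are illustrative, not from the original) tallies one flop for each multiplication and each addition:

```python
def inner_product_with_count(x, y):
    """Return (s, flops) for the inner product s = x^T y,
    counting each floating-point * and + as one flop."""
    flops = 0
    s = x[0] * y[0]
    flops += 1                  # one multiplication
    for i in range(1, len(x)):
        s += x[i] * y[i]
        flops += 2              # one multiplication and one addition
    return s, flops

s, flops = inner_product_with_count([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
# s = 32.0 and flops = 5 = 2*3 - 1
```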

The number of flops required by a numerical computation is one measure of its cost. However, it ignores the cost of data movement, which can be as large as, or even larger than, the cost of floating-point arithmetic for large computations carried out on modern machines with hierarchical memories. Nevertheless, flop counts provide a useful way to compare the cost of competing algorithms when most of the flops occur in similar types of operations, such as matrix multiplications, and they can be used to choose between algorithm variants (López et al., 2022).

The following table summarizes some flop counts, where α is a scalar, x, y, and b are n-vectors, A and B are n × n matrices, and x = A⁻¹b is computed via LU factorization.

Computation | Cost |
---|---|
x + αy | 2n flops |
xᵀy | 2n − 1 flops |
Ax | 2n² − n flops |
AB | 2n³ − n² flops |
x = A⁻¹b | 2n³/3 flops |

As the table suggests, most standard problems involving n × n matrices can be solved with a cost of order n³ flops or less, so the interest is in the exponent (1, 2, or 3) of the dominant term and its multiplicative constant. For rectangular matrices there can be several potentially dominant terms and comparing competing algorithms is more complicated.
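To see why only the highest order term matters for large n, the exact count n²(2n − 1) for matrix multiplication can be compared with the leading term 2n³. A small Python sketch (illustrative, not from the original):

```python
def matmul_flops(n):
    # n^2 inner products, each costing 2n - 1 flops
    return n * n * (2 * n - 1)

# The ratio of the exact count to the leading term 2n^3 approaches 1
# as n grows, so the lower-order n^2 term is dropped.
for n in (10, 100, 1000):
    print(n, matmul_flops(n) / (2 * n**3))
```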

## Early Usage

Moler and Van Loan give a different definition of "flop" in their classic paper "Nineteen Dubious Ways to Compute the Exponential of a Matrix" (1978), and their definition was adopted in the `flop` command in the original Fortran version of MATLAB, which counts the number of flops executed in the most recent command or since MATLAB started. In the 1981 MATLAB Users' Guide, Cleve Moler explains that

One flop is one execution of a Fortran statement like

`S = S + X(I)*Y(I)`

or

`Y(I) = Y(I) + T*X(I)`

In other words, it consists of one floating point multiplication, together with one floating point addition and the associated indexing and storage reference operations.

This original definition attempted to account for indexing and storage costs in the computer implementation of an algorithm as well as the cost of the floating-point arithmetic. It was motivated by the fact that in linear algebra computations most arithmetic appears in statements of the form `s = s + x*y`, which arise in evaluating inner products and in adding a multiple of one vector to another.

In the *LINPACK Users' Guide* (1979, App. B) flop counts were used to normalize timing data, but the word "flop" itself was not used.

The widespread adoption of the flop was ensured by its use in Golub and Van Loan's book *Matrix Computations* (1983). In the 1989 second edition of the book a flop was redefined as in our first paragraph, and this definition quickly became the standard usage. The Fortran statements given by Moler now involve *two* flops.

The MATLAB `flop` function survived until around 2000, when MATLAB started using LAPACK in place of LINPACK for its core linear algebra computations and it became impossible to keep track of the number of flops executed.

It is interesting to note that an operation of the form `S + X(I)*Y(I)` that was used in the original definition of a flop is now supported in the hardware of certain processors. The operation is known as a *fused multiply-add* and is done in the same time as one multiplication or addition and with just one rounding error.
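The single-rounding property can be illustrated in Python. The hardware instruction is not directly exposed in older Python versions (`math.fma` only arrived in Python 3.13), so the sketch below models a fused multiply-add with exact rational arithmetic; the helper name and the chosen inputs are illustrative assumptions.

```python
from fractions import Fraction

def fma_model(a, b, c):
    # Exact a*b + c followed by a single correct rounding to double
    # precision, which is what a hardware fused multiply-add delivers.
    return float(Fraction(a) * Fraction(b) + Fraction(c))

a = b = 1.0 + 2.0**-27
c = -1.0
separate = a * b + c        # two roundings: the tiny 2^-54 term is lost in a*b
fused = fma_model(a, b, c)  # one rounding: the 2^-54 term survives
# separate == 2**-26, while fused == 2**-26 + 2**-54
```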

## Flops as a Measure of Computer Performance

Another usage of "flop", in the plural, is in measuring computer performance, with "flops" meaning floating-point operations per second and having a prefix specifying a power of ten. Thus we have the terms megaflops, gigaflops, and so on, up to exaflops (10¹⁸ floating-point operations per second) for today's fastest computers.

The earliest citation given in the Oxford English Dictionary for this use of the word "flops" is in a 1977 paper by Calahan, who writes

The most common vector performance measure is the number of floating point operations per second (FLOPS), obtained by dividing the number of floating point operations—known from the algorithmic complexity—by the computation time.
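This measure is straightforward to reproduce for the n × n matrix multiplication discussed earlier: divide the known operation count by the measured time. The Python sketch below is illustrative only (pure Python, so the achieved rate will be far below hardware peak):

```python
import time

n = 60
A = [[1.0] * n for _ in range(n)]
B = [[2.0] * n for _ in range(n)]

t0 = time.perf_counter()
C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
elapsed = time.perf_counter() - t0

flop_count = 2 * n**3 - n**2    # n^2 inner products of length n
rate = flop_count / elapsed     # achieved floating-point operations per second
print(f"{rate:.3g} flops per second")
```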

The term "MFLOPS", for "million floating point operations per second", is used in an earlier Cray-1 evaluation report (Keller, 1976).

If you are aware of any earlier references please put them in the comments below.

## Dictionary Definitions

The word "flop" appears in general dictionaries, but there is some variation in whether it appears under "flop" or "flops", in capitalization, and in whether it means an operation or an execution rate.

Dictionary | Term | Definition |
---|---|---|
Oxford English Dictionary (online, 2023) | FLOP | "A floating-point operation per second" |
Concise Oxford English Dictionary (11th ed., 2004) | flop | "Floating-point operations per second" |
The Chambers Dictionary (13th ed., 2014) | flop | "A floating-point operation" |
Collins English Dictionary (13th ed., 2018) | flops or FLOPS | "Floating-point operations per second" |
The American Heritage Dictionary of the English Language (5th ed., 2018) | flops or flop | "A measure of the speed of a computer in operations per second, especially operations involving floating-point numbers" |
Merriam-Webster (online, 2023) | flops | "A unit of measure for calculating the speed of a computer equal to one floating-point operation per second" |

## Notes and References

I thank Jack Dongarra, Cleve Moler, and Charlie Van Loan for comments on the history of flops.

- D. A. Calahan, Algorithmic and Architectural Issues Related to Vector Processors, in: Large Engineering Systems. Proceedings of the International Symposium Held at the University of Manitoba, Winnipeg, Manitoba, Canada, August 9–12, 1976, Alvin Wexler, ed., Pergamon Press, Oxford, 1977.
- T. Keller, CRAY-1 Evaluation. Final Report, Report LA-6456-MS, Los Alamos Scientific Laboratory, Los Alamos, New Mexico, USA, December 1976.
- Francisco López, Lars Karlsson, and Paolo Bientinesi, FLOPs as a Discriminant for Dense Linear Algebra Algorithms, in Proceedings of the 51st International Conference on Parallel Processing, ACM Press, 2022.
- Cleve B. Moler and Charles F. Van Loan, Nineteen Dubious Ways to Compute the Exponential of a Matrix, SIAM Rev. 20(4), 801–836, 1978.

## Related Blog Posts

This article is part of the "What Is" series, available from https://nhigham.com/index-of-what-is-articles/ and in PDF form from the GitHub repository https://github.com/higham/what-is.