

Queueing theory is a branch of mathematics, specifically operations research, concerned with the queues that arise when customers require service facilities.

Queueing theory is used to study queueing systems.

Queueing theory is a mathematical model of queueing systems.

A fuller explanation can be found in my lecture notes on queueing theory (Teori Antrian), published by the Mathematics study program at Universitas Terbuka.

You can view them in the PDF file below:


You can watch the video below:


Queuing theory is a branch of mathematics that studies how lines form, how they function, and why they malfunction. Queuing theory examines every component of waiting in line, including the arrival process, service process, number of servers, number of system places, and the number of customers.

Queueing theory is the mathematical study of waiting lines, or queues.[1] A queueing model is constructed so that queue lengths and waiting time can be predicted.[1] Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.

Queueing theory has its origins in research by Agner Krarup Erlang when he created models to describe the system of Copenhagen Telephone Exchange company, a Danish company.[1] The ideas have since seen applications including telecommunication, traffic engineering, computing[2] and, particularly in industrial engineering, in the design of factories, shops, offices and hospitals, as well as in project management.[3][4]

Queuing theory (or queueing theory) refers to the mathematical study of the formation, function, and congestion of waiting lines, or queues.

Queuing theory is the study of queues and the random processes that characterize them. It deals with making mathematical sense of real-life scenarios, such as a mob of people queuing up at a bank or the tasks queuing up on your computer's back end.

In queuing theory we often want to find out how long wait times or queue lengths are, and we can use models to do this. These models are typically important in business and software applications, and queueing theory is often considered a part of operations research.

The study of systems in which customers, arriving at random and requiring varying periods of service, may have to wait in order to be served. From the number of service points and the probability distributions of arrival times and service times, the distribution of the length of queue and the waiting time before service may be predicted.
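As a simple illustration of such a prediction, consider the M/M/1 queue (Poisson arrivals, exponential service times, a single server). The sketch below, with illustrative function names and rates of my own choosing, computes its standard steady-state measures:

```python
# Minimal sketch: steady-state measures for an M/M/1 queue
# (Poisson arrivals at rate lam, exponential service at rate mu).
# Function and variable names here are illustrative assumptions.

def mm1_measures(lam: float, mu: float) -> dict:
    """Return utilization, mean number in system, and mean time in system."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu          # server utilization
    L = rho / (1 - rho)     # mean number of customers in the system
    W = 1 / (mu - lam)      # mean time a customer spends in the system
    return {"rho": rho, "L": L, "W": W}

# Example: 4 arrivals per hour, server completes 5 per hour on average.
print(mm1_measures(4.0, 5.0))  # rho = 0.8, L = 4.0 customers, W = 1.0 hour
```

Note how sharply congestion grows with utilization: at 80% utilization an average of four customers are already in the system.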

Queuing theory has important applications in any system liable to congestion, where the costs of improved service may be balanced against the costs of congestion.

Queuing theory deals with problems which involve queuing (or waiting). Typical examples might be:

  • banks/supermarkets – waiting for service
  • computers – waiting for a response
  • failure situations – waiting for a failure to occur e.g. in a piece of machinery
  • public transport – waiting for a train or a bus

As we know, queues are a common everyday experience. Queues form because resources are limited, and in fact it makes economic sense to have them. For example, how many supermarket tills would you need to avoid queuing altogether? How many buses or trains would be needed if queues were to be eliminated?

In designing queueing systems we need to aim for a balance between service to customers (short queues implying many servers) and economic considerations (not too many servers).

Queuing theory is concerned with studying all the various dynamics of lines, or "queues," and how they may be made to operate more efficiently. It is essentially the study of "waiting in line," including how people behave when they have to queue up to make a purchase or receive a service, what types of queue organization move people through a line most efficiently, and how many people a specific queuing arrangement can process through the line within a given time frame.

A theory that deals with providing a service on a waiting line, or queue, especially when the demand for it is irregular and describable by probability distributions, such as processing phone calls arriving at a telephone exchange or collecting highway tolls from drivers at tollbooths.

Queueing theory is the mathematical study of queues and waiting lines.

The History of Queueing Theory:

A leading subject of study in operations research, queueing theory is the mathematical study of queues and waiting lines. Today's queueing theory is indebted to the emergence and growth of operations research after 1945, yet the origins of the field predate those of operations research, extending back by some measures to Siméon Denis Poisson's 1837 work on criminal cases. A more generally accepted starting date is the early 20th century, when university-trained Danish and Norwegian mathematicians first began using probability theory while working at telephone companies. This essay takes the latter perspective, tracing the history of queueing theory from the pioneering work of Agner Krarup Erlang, through the events of the Cold War, to the present.

Queueing theory's greatest successes in real-world applications have been in telecommunications and data networking. While there have been vigorous attempts to adapt and extend the model to solve a variety of queueing problems, the complexity of calculations, challenges in adapting stochastic methods to various real-world queues, and the need to have usable, timely results have all limited the practical adoption of queueing theory. Despite these challenges, the field has achieved a prominent place in ORMS practice today.

Queueing Theory’s Origins: 1900 to 1917

The origins of modern queueing analysis lie in the growth of telephone systems in Denmark and Norway during the early 20th century. There, telephone engineers (themselves university-trained mathematicians) used statistical techniques to estimate capacity requirements for automatic telephone exchanges. With little prior experience on which to base design choices, these engineers used mathematics, especially probability, as an aid to their work. The Copenhagen Telephone Company (CTC), formed in 1882, employed a thriving community of university-trained mathematicians thanks to the efforts of its chief engineer, Johan Ludwig Jensen, perhaps best known for "Jensen's Inequality." Jensen himself had studied physics, chemistry, and mathematics at Denmark's College of Technology. As the president of the Danish Mathematical Society, Jensen attracted young mathematicians to the CTC, among them Agner Krarup Erlang. While studying problems of estimating telephone exchange capacity requirements under Jensen's direction, Erlang showed that inbound calls to a switch followed a Poisson distribution, and that a system of lines and calls would achieve statistical equilibrium. In 1917, Erlang published "Solution of Some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges," which described three formulas used to model call activity. Two of those, the Erlang B (or "blocking") formula and the Erlang C (the "delay" or queueing formula, sometimes called Erlang D), are still in use today. Under the Erlang B regime, a customer who finds all servers (telephone lines) busy departs, never to return.

Under the Erlang C regime, a customer who finds all servers busy waits in the queue until a server is available.
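As a rough illustration of the two regimes, the sketch below implements the standard recursions for the Erlang B and Erlang C formulas; the function names and example traffic figures are my own, not Erlang's:

```python
# A sketch of Erlang's two surviving 1917 formulas.
# a = offered load in erlangs (arrival rate x mean call duration), n = servers.
# Names and example numbers are illustrative assumptions.

def erlang_b(n: int, a: float) -> float:
    """Blocking probability: an arriving call finds all n lines busy and is lost."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)   # standard numerically stable recursion
    return b

def erlang_c(n: int, a: float) -> float:
    """Probability an arriving call must wait (all servers busy, queueing allowed)."""
    if a >= n:
        return 1.0                # overloaded system: every arrival waits
    b = erlang_b(n, a)
    return n * b / (n - a * (1 - b))

# Example: 2 erlangs of offered traffic on 4 lines.
print(erlang_b(4, 2.0))  # 2/21, about 9.5% of calls blocked (Erlang B regime)
print(erlang_c(4, 2.0))  # 4/23, about 17.4% of calls delayed (Erlang C regime)
```

The same offered load produces a noticeably higher delay probability than blocking probability, since under Erlang C blocked calls stay in the system rather than leaving.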

In neighboring Norway, Erlang's contemporary Tore Olaus Engset also studied physics and mathematics, earning a degree in 1894 from the University of Oslo. At the state-owned Telegrafverket (today's Telenor), Engset worked on planning for an aggressive expansion of Telegrafverket's phone service to all Norwegians, as mandated by an 1899 national law. As part of that work he developed a blocking formula, and also addressed other concerns, including customer departures, traffic variations, traffic grading, and the finite size of the customer population. While Engset's blocking formula found some adoption outside Telegrafverket, his other contributions (described in a 1915 report) did not. Other telephone engineers appear to have preferred Erlang's simpler model and formulas to Engset's, both because they were easier to calculate and because they tended to give more conservative results, that is, to over-estimate switch capacity requirements.

Congestion Theory and Its Diffusion, 1917-1945

From its beginnings in Denmark and Norway, queueing theory (or what was then called the study of congestion problems) spread with the growth of telephone systems in Europe, the United States, and the Soviet Union. Telephone engineers in those nations, seeking mathematical approaches to estimating phone switch capacity, translated Erlang's articles into English, French, German, and Russian. Before English translations were available, one engineer at Bell Labs learned Danish in order to study Erlang's work. Much of the work on congestion theory from 1917 to 1927, such as that completed by C. D. Commelin and G. F. O'Dell, centered on confirming the results of Erlang's formulas with empirical findings. Others, including E.C. Molina and Thornton C. Fry, advocated applying Erlang's formulas, and probability theory in general, to the management of phone systems in the United States. Erlang's formulas have long been used to set staffing levels in inbound telephone call centers; see, for example, the description by Bruce Andrews and Henry Parsons of their work at the retailer L.L. Bean.

Modeling work remained limited until after World War II, but there were several key contributions made during the 1930s. Conny Palm studied caller abandonment. Felix Pollaczek and Aleksandr Yakovlevich Khinchin independently arrived at formulas for calculating the mean waiting time of a queueing model with an arbitrary service time distribution (their work eventually led them to form a friendship in the years before the war). There were also some attempts to apply congestion theory to problems outside teletraffic, with T.C. Fry's The Theory of Probability as Applied to Problems of Congestion (1928) being one example. As tensions turned to open hostilities in Europe, however, the use of probability theory to analyze congestion problems was still limited to telephone system engineering work. The blossoming of queueing theory out of the study of congestion in telephone systems would have to wait for the events of World War II to create fertile ground.

Operations Research and the “Golden Age” of Queueing Theory, 1945-1975

The rise of operations research during World War II marked a new phase in the history of queueing theory. A new generation of practitioners became interested in the mathematical study of queues. The formation of professional associations, conferences, and survey texts demonstrated a maturing field. The twenty-year long period of economic expansion after the war also seemed to offer numerous opportunities for operations researchers to apply queueing theory in industry, urban planning, highway construction, jet travel, and recreation. These events led some to reflect on the decade and a half after World War II as a “golden age” of queueing theory. However, this enthusiasm was tempered by the fact that, outside of communications, real-world application of queueing theory remained limited.

A key figure in the postwar era is the Oxford-trained mathematician David George Kendall. The Berlin Airlift of 1948-1949 inspired Kendall to consider how probability theory could be used to solve queueing problems. He published several papers on the topic that significantly shaped the future of queueing theory, including the “A/B/C” shorthand notation for describing queues, embedded Markov chains, and the first use of the phrase “queueing system” to describe the mathematical model. In Kendall’s A/B/C notation, the A argument identifies the arrival process, e.g., M for Markovian or exponentially distributed interarrival times. The B identifies the service distribution, e.g., M for exponential, and G for general.  The third argument denotes the number of servers. Kendall also published one of the first comprehensive bibliographies covering work on teletraffic, congestion theory, and queues. Through this bibliography-building work of Kendall and others, the new generation of operations researchers “rediscovered” the work of earlier teletraffic engineers. For example, while studying road traffic problems in the late 1950s Frank Haight came across Conny Palm’s 1937 work on queueing abandonment.

Model building took off in the decade and a half after 1945. Alan Cobham (1954) wrote one of the first well-known papers on priority queueing systems. Cobham was a polymath: his 1965 paper, "The intrinsic computational difficulty of functions," was one of the first on the unrelated topic of computational complexity. A common use of priorities is to give priority to short jobs (think express lanes in a supermarket). At heavy loads this can substantially reduce average wait time at a modest expense for long jobs. J.F.C. Kingman and Linus Schrage published work on queue service disciplines, including "First In, First Out" (FIFO) and Shortest Remaining Processing Time. Several practitioners, including H. White and L.S. Christie (1958), Avi-Itzhak and Naor (1961), Miller and Gaver (1962), and Keilson and Kooharian, studied server breakdown. In 1961, John D. C. Little put forward the eponymous Little's law, which would later significantly impact operations management and computer architecture. This law states that for service systems that are, loosely speaking, stable, the average number of customers in the system equals the average arrival rate times the average time in system, or in shorthand notation, N = λT. Lajos Takács made several contributions to time-dependent queueing processes, including queue feedbacks, priority queues, and balking. In 1956, Paul J. Burke noted that if the input to a queue is Poisson, then under certain circumstances, e.g., M/M/c, so is the output. This observation is now called Burke's theorem. A year later, James R. Jackson, while studying ways to improve machine shop job scheduling, built on Burke's work to develop the idea of networks of queues. This modeling work helped motivate a parallel effort to ease the increasing complexity of the calculations involved.
Jackson showed that there are certain classes of networks, now called Jackson networks, for which one can analyze each node independently to correctly compute expected wait times and other measures. Takács demonstrated the utility of combinatorics and showed how other innovations (such as Bertrand's ballot theorem) could help solve queueing problems. A particular area of focus was the use of transforms. In 1954, W. Ledermann and G.E. Reuter showed how spectral theory could be useful, and Norman T.J. Bailey demonstrated generating functions.
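Little's law is simple enough to check directly. The sketch below (with illustrative rates of my own choosing) confirms that the M/M/1 closed forms for mean number in system and mean time in system satisfy N = λT:

```python
# Little's law sketch: N = lambda * T for any stable service system.
# Checked here against the M/M/1 closed forms; rates are illustrative.

lam, mu = 4.0, 5.0        # arrivals per hour, service completions per hour
T = 1.0 / (mu - lam)      # M/M/1 mean time in system
N = lam / (mu - lam)      # M/M/1 mean number in system

# Little's law says these two quantities must agree:
assert abs(N - lam * T) < 1e-12
print(N, lam * T)         # both 4.0
```

The appeal of the law is its generality: it requires no assumptions about arrival or service distributions beyond stability, which is why it later found uses from emergency rooms to processor design.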

Several textbooks published during this time demonstrate a rapidly maturing field. Included among them are the first text, by Philip McCord Morse (1958), and another by Lajos Takács (1962). In the United States, Thomas L. Saaty's 1961 Elements of Queueing Theory, the second English-language book on the subject, became a standard, accessible textbook used to teach queueing theory up through the early 1980s. Saaty's book is also noteworthy for including an exhaustive bibliography.

Most queueing theory publications during this era were devoted to modeling rather than applications. In 1966, Saaty noted that "real life queues are still primitive," an assertion which prompted U. Narayan Bhat to undertake a historical review of the queueing theory literature. As Bhat concluded in his 1969 article "Sixty Years of Queueing Theory," the complex calculations involved were a chief factor contributing to queueing theory's limited applications. In the case of road traffic queues, stochastic methods were not easily adapted to the behavior of vehicular traffic, thus dissuading practitioners from using queueing theory over other available techniques. Clients also doubted the value of large-scale queueing theory-based systems to solve particular challenges. The 1964-1965 New York World's Fair Corporation, for example, rejected a proposal from Arthur D. Little, Inc. to implement a crowd control system based on a system of priority queues. At the same time, however, Fair management did solicit IBM's Service Bureau Corporation to complete a queueing theory simulation of turnstile requirements for the Fair's main gates. John Magee, in his oral history interview, described how queueing theory was used by the Arthur D. Little consulting firm to help American Airlines design the first computerized on-line reservations system, SABRE. Using queueing theory, Magee and his colleagues were able to show that one expensive communication modem could serve multiple reservation agents. The catalog merchant L.L. Bean used the M/M/c queueing model to set customer service agent staffing levels at its inbound call center; see Andrews and Parsons.

Yet the problem of finding applied uses of queueing theory may also be one of the historical record. Many published articles, for example, were concerned with demonstrating the relevance, rather than reviewing applied examples, of particular models.

Queueing Theory in the Soviet Union after 1945

In the Soviet Union, where operations research did not have the foothold it had in the United States and the United Kingdom, the mathematical study of queues continued thanks to the efforts of Aleksandr Khinchin. In Russia, queueing theory went by the name of the theory of mass service. Khinchin became a key force in sustaining interest in the use of mathematics to solve queueing problems east of the Iron Curtain. He published a well-regarded, highly readable study in 1954, "Mathematical Methods of the Theory of Queues," in the proceedings of the Steklov Institute of Mathematics. This accessible work achieved widespread popularity in the Soviet Union, setting the paradigm for further developments of queueing analysis within the Soviet Bloc.

The 1970s Onward

Queueing theory's history from the 1970s is one marked by shifting national priorities in the United States and the United Kingdom, which along with deregulation and the development of new markets created new opportunities for model building and real-world applications. Federal spending in the United States moved away from military toward domestic initiatives, leading practitioners who had studied military queueing problems to shift to problems in urban service delivery, including fire suppression, emergency medical services, and law enforcement. New journals and professional societies forming after 1980 also show a strong, sustained interest in queueing theory. The 1990s saw further real-world applications of queueing theory outside of communications systems. Practitioners used Little's law, for example, to help inform staffing decisions at hospital emergency rooms and to guide work in operations management and computer architecture. Despite these achievements, however, as recently as 2009 some still lamented that outside demands reduced the quality of queueing models.

Model-building continued, albeit with some important changes. Perhaps the biggest shift was away from what had been a long-standing stochastic, top-down orientation. Gordon Newell's 1971 book Applications of Queueing Theory notably took this new direction, stressing deterministic models and diffusion approximations over purely stochastic ones. In 1987, Richard Larson published "Psychology of Queueing and Social Justice," which brought attention to how customers experience queues as a factor in queue design and helped define a new subfield. In 1990, Larson also proposed the idea of a "Queue Inference Engine," which helped enable bottom-up model building through the analysis of transactional data associated with a queue's functioning.

One of queueing theory's greatest real-world achievements began during the 1970s, when the work of Leonard Kleinrock and his students became fundamental to the design of what would become today's data communications technologies and the Internet. Kleinrock began working on queueing systems in 1960, building on the work of Burke and James R. Jackson. Kleinrock published a two-volume collection in 1975 and 1976 and sponsored a series of lectures to disseminate the ideas presented in the two books.

Queueing theory's institutional supports continued to grow through the last twenty-five years of the twentieth century. New journals and books also helped reshape the field. 1986 saw the first issue of the journal Queueing Systems. In 1985, Donald Gross and Carl M. Harris released a revised edition of Fundamentals of Queueing Theory, a book that has since been widely used to teach the subject (a fourth edition was published in 2008).

The year 2009 saw the publication of Optimal Design of Queueing Systems by Shaler Stidham.  In a 1964 article, Fred Hillier introduced the idea of optimization in a queueing system to choose parameters such as the best service rate (faster processors cost more money but reduce customer wait time) or arrival rate (think of an expressway on-ramp with red-green lights to limit arrival rate).  Stidham’s book surveyed subsequent research, including the use of tolls or prices in networks of queues, the distinction between individual and system optima in queues, and how tolls can be used to achieve a system optimum, as well as applications to flow control in communication networks.
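Hillier's design question can be illustrated with a small M/M/1 cost model. In the sketch below, the cost coefficients, function names, numbers, and the closed-form optimum are my own illustration of the general idea, not Hillier's original formulation:

```python
import math

# Illustrative M/M/1 design problem in the spirit of Hillier (1964):
# choose the service rate mu to minimize service cost plus congestion cost.
# All names and coefficients below are assumptions for this example.

def total_cost(mu: float, lam: float, c_service: float, c_wait: float) -> float:
    """Cost per unit time: paying c_service per unit of speed mu,
    plus a holding cost c_wait on the mean number in system L = lam/(mu - lam)."""
    return c_service * mu + c_wait * lam / (mu - lam)

def best_rate(lam: float, c_service: float, c_wait: float) -> float:
    """Minimizer of total_cost: setting its derivative in mu to zero gives
    mu* = lam + sqrt(c_wait * lam / c_service)."""
    return lam + math.sqrt(c_wait * lam / c_service)

lam = 4.0
mu_star = best_rate(lam, c_service=2.0, c_wait=8.0)
print(mu_star)                               # 8.0: optimal service rate
print(total_cost(mu_star, lam, 2.0, 8.0))    # 24.0: minimum cost per unit time
```

A faster server costs more per unit time but shrinks the queue; the optimum balances the two, which is precisely the trade-off between service cost and congestion cost that Hillier formalized.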

Optimal control of queueing systems extends the idea of optimization to dynamically varying service rates, arrival rates, and other variables.  Research has emphasized the application of dynamic programming among other optimization methods to establish the structure of optimal control policies, e.g., admit fewer arriving customers when the queue is longer.

The first Canadian Queueing Conference, or “CanQueue,” was held in 1999. Innovations in calculation continued, with Marcel Neuts showing how matrix methods could solve queueing problems in 1981. In the early 2000s, Joseph Abate and Ward Whitt proposed a framework for easing calculations.

In the 2000s, the U.S. journal Operations Research and the UK's Operational Research Society celebrated their fiftieth anniversaries. In commemoration of these events, Operations Research and the Journal of the Operational Research Society released special issues in January 2002 and May 2009, respectively, collecting articles reflecting on the field's history. Several contributors discussed the history of queueing theory. Shaler Stidham's chapter in the 50th anniversary issue of Operations Research includes a comprehensive survey of work done up through the year 2000 on optimal design and control of queueing systems.

Compiled by James Skee.

Edited by Linus Schrage.



Many valuable applications of queueing theory include traffic flow (vehicles, aircraft, people, communications), scheduling (patients in hospitals, jobs on machines, programs on a computer), and facility design (banks, post offices, supermarkets).

Some applications of queueing theory can be seen in the videos below:








Insya Allah, I hope this is useful.
