
Computer multitasking

Concurrent execution of multiple processes

Modern desktop operating systems are capable of handling large numbers of different processes at the same time. This screenshot shows Linux Mint simultaneously running the Xfce desktop environment, Firefox, a calculator program, the built-in calendar, Vim, GIMP, and VLC media player.

Multitasking of Microsoft Windows 1.01, released in 1985, here shown running the MS-DOS Executive and Calculator programs

In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to terminate. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents), loading the saved state of another program, and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking).

Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time.[1] Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs.

Multitasking has been a common feature of computer operating systems since at least the 1960s. It allows more efficient use of the computer hardware; when a program is waiting for some external event such as a user input or an input/output transfer with a peripheral to complete, the central processor can still be used with another program. In a time-sharing system, multiple human operators use the same processor as if it were dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems, such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface.[2]

Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program.
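
As a rough illustration on a Unix-like system, a process can lower its own scheduling priority through the POSIX nice value; this sketch uses Python's standard os module, and the specific increment is an arbitrary choice:

```python
import os

# Read the current nice value without changing it (increment of 0).
# On most Unix-like systems the default is 0; higher means lower priority.
print("current nice value:", os.nice(0))

# Lower this process's priority by 10 so that more important jobs get
# a larger share of the CPU. Unprivileged processes can only raise
# their nice value, never lower it back down.
print("new nice value:", os.nice(10))
```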

A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors.

The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Romanian, Czech, Danish and Norwegian.

Multiprogramming

In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient.

The first computer using a multiprogramming system was the British Leo III owned by J. Lyons and Co. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running.[3]

The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent.[citation needed]

Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed.[4][5]

Cooperative multitasking

Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems.[6]

As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile.
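
The cooperative model can be sketched in a few lines of Python, using generators as toy tasks that explicitly yield control; this is an illustrative round-robin loop, not any real system's scheduler:

```python
from collections import deque

def task(name, steps):
    """A toy cooperative task: do a little work, then yield the CPU."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # voluntarily cede time back to the scheduler

def run(tasks):
    """Round-robin scheduler: resume each task until all have finished."""
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)           # run the task up to its next yield
            ready.append(current)   # not finished: queue it again
        except StopIteration:
            pass                    # task completed; drop it

run([task("A", 2), task("B", 3)])
```

A task that loops forever without yielding would monopolize the loop above, which is precisely the hang described in the preceding paragraph.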

Preemptive multitasking

Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and MULTICS in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives,[7] as well as modern versions of Windows.

At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait", while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.[citation needed]
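
The difference between busy waiting and blocking can be sketched with Python threads; the Event below stands in for the hardware interrupt that signals arriving data:

```python
import threading
import time

data_ready = threading.Event()

def busy_wait():
    # Polling: spins on the flag, consuming CPU while doing no useful
    # work. (Defined only for contrast; never started here.)
    while not data_ready.is_set():
        pass

def blocking_wait():
    # Blocking: the thread sleeps inside wait() and uses no CPU until
    # the event is set -- the analogue of being woken by an interrupt.
    data_ready.wait()
    print("data arrived, resuming work")

worker = threading.Thread(target=blocking_wait)
worker.start()
time.sleep(0.1)   # simulate a slow peripheral
data_ready.set()  # "interrupt": unblock the waiting thread
worker.join()
```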

The earliest preemptive multitasking OS available to home users was Sinclair QDOS on the Sinclair QL, released in 1984, but very few people bought the machine. Commodore's Amiga, released the following year, was the first commercially successful home computer to use the technology, and its multimedia abilities make it a clear ancestor of contemporary multitasking personal computers. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. It was later adopted on the Apple Macintosh by Mac OS X, which, as a Unix-like operating system, uses preemptive multitasking for all native applications.

A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively.[8] 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer support legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications.

Real time

Another reason for multitasking was in the design of real-time computing systems, where there are a number of possibly unrelated external activities needed to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities were given a greater share of available process time.[9]

Multithreading

As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g., one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data.[citation needed]

Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context.[10][11][12]
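
A minimal sketch of this shared memory context, using Python's standard threading module; the shared dictionary is just an illustrative stand-in for any data the threads exchange:

```python
import threading

shared = {"result": None}  # an ordinary object in the process's address space

def worker():
    # The thread writes directly into the same object the main thread
    # reads; no copying or message passing is needed, unlike processes.
    shared["result"] = sum(range(1_000_000))

t = threading.Thread(target=worker)
t.start()
t.join()  # switching to and from the thread never changes memory context

print(shared["result"])  # the main thread sees the worker's write directly
```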

While threads are scheduled preemptively, some operating systems provide a variant of threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors.[13]

Some systems directly support multithreading in hardware.

Memory protection

Essential to any multitasking system is to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security.

In general, memory access management is a responsibility of the operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as "segmentation fault".
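
This protection can be observed deliberately from Python by dereferencing an invalid address through ctypes; the assumption here is CPython on a typical MMU-equipped Unix-like system, where the kernel terminates the process with SIGSEGV:

```python
import ctypes

# Deliberately read from address 0, which is outside this process's
# mapped address space. The MMU faults, the kernel delivers SIGSEGV,
# and the process is forcibly terminated with a "segmentation fault".
ctypes.string_at(0)

print("never reached")
```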

In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL.
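
A minimal sketch of kernel-provided shared memory between two processes, using Python's multiprocessing.shared_memory module (which wraps POSIX shared memory rather than the System V mechanism named above):

```python
from multiprocessing import Process, shared_memory

def child(name):
    # Attach to the block the parent created and read what it wrote.
    shm = shared_memory.SharedMemory(name=name)
    print("child sees:", bytes(shm.buf[:5]))
    shm.close()

if __name__ == "__main__":
    # Ask the kernel for a memory block visible to multiple processes.
    shm = shared_memory.SharedMemory(create=True, size=16)
    shm.buf[:5] = b"hello"

    p = Process(target=child, args=(shm.name,))
    p.start()
    p.join()

    shm.close()
    shm.unlink()  # release the kernel's shared-memory object
```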

Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software.

Memory swapping

Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage.[14]

Programming

Processes that are entirely independent are not much trouble to program in a multitasking environment. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks.[citation needed]

Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource.[citation needed]
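
One such technique is mutual exclusion; below is a minimal Python sketch in which a lock keeps two threads from corrupting a shared counter (the counter is an illustrative resource, not taken from the source):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # Without the lock, the read-modify-write of `counter` from two
        # threads can interleave and silently lose updates (a race).
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # reliably 200000 with the lock held around each update
```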

Bigger systems were sometimes built with a central processor(s) and some number of I/O processors, a kind of asymmetric multiprocessing.[ citation needed ]

Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.[15]

See also

  • Process state
  • Task switching

References

  1. ^ "Concurrency vs Parallelism, Concurrent Programming vs Parallel Programming". Oracle. Archived from the original on April 7, 2016. Retrieved March 23, 2016.
  2. ^ Anthony Ralston, Edwin D. Reilly (ed.), Encyclopedia of Computer Science, Third Edition, Van Nostrand Reinhold, 1993, ISBN 0-442-27679-6, articles "Multitasking" and "Multiprogramming"
  3. ^ MASTER PROGRAMME AND PROGRAMME TRIALS SYSTEM PART 1 MASTER PROGRAMME SPECIFICATION. February 1965. Section 6, "PRIORITY CONTROL ROUTINES".
  4. ^ Lithmee (2019-05-20). "What is the Difference Between Batch Processing and Multiprogramming". Pediaa.Com. Retrieved 2020-04-14.
  5. ^ "Evolution of Operating System". 2017-09-29. Retrieved 2020-04-14 .
  6. ^ "Preemptive multitasking". riscos.info. 2009-eleven-03. Retrieved 2014-07-27 .
  7. ^ "UNIX, Part I". The Digital Enquiry Initiative. ibiblio.org. 2002-01-xxx. Retrieved 2014-01-09 .
  8. ^ Joseph Moran (June 2006). "Windows 2000 & 16-Bit Applications". Smart Computing. Vol. 16, no. 6. pp. 32–33. Archived from the original on January 25, 2009.
  9. ^ Liu, C. L.; Layland, James W. (1973-01-01). "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment". Journal of the ACM. 20 (1): 46–61. doi:10.1145/321738.321743. ISSN 0004-5411.
  10. ^ Eduardo Ciliendo; Takechika Kunimasa (April 25, 2008). "Linux Performance and Tuning Guidelines" (PDF). redbooks.ibm.com. IBM. p. 4. Archived from the original (PDF) on February 26, 2015. Retrieved March 1, 2015.
  11. ^ "Context Switch Definition". linfo.org. May 28, 2006. Archived from the original on February 18, 2010. Retrieved February 26, 2015.
  12. ^ "What are threads (user/kernel)?". tldp.org. September viii, 1997. Retrieved February 26, 2015.
  13. ^ Multitasking different methods. Accessed on February 19, 2019.
  14. ^ "What is a swap file?". kb.iu.edu . Retrieved 2018-03-26 .
  15. ^ "Operating Systems Compages". cis2.oc.ctc.edu . Retrieved 2018-03-17 .


Source: https://en.wikipedia.org/wiki/Computer_multitasking